Test Report: Docker_Linux_crio_arm64 21625

                    
f5ddb069c61c98d891ee28fed061fe1ee97ea306:2025-10-03:41753

Failed tests (38/326)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.34
35 TestAddons/parallel/Registry 16.77
36 TestAddons/parallel/RegistryCreds 0.51
37 TestAddons/parallel/Ingress 144.9
38 TestAddons/parallel/InspektorGadget 6.26
39 TestAddons/parallel/MetricsServer 5.36
41 TestAddons/parallel/CSI 40.32
42 TestAddons/parallel/Headlamp 3.16
43 TestAddons/parallel/CloudSpanner 5.26
44 TestAddons/parallel/LocalPath 9.44
45 TestAddons/parallel/NvidiaDevicePlugin 5.31
46 TestAddons/parallel/Yakd 6.26
52 TestForceSystemdFlag 516.68
53 TestForceSystemdEnv 513.5
97 TestFunctional/parallel/ServiceCmdConnect 603.49
125 TestFunctional/parallel/ServiceCmd/DeployApp 600.86
134 TestFunctional/parallel/ServiceCmd/HTTPS 0.51
135 TestFunctional/parallel/ServiceCmd/Format 0.53
136 TestFunctional/parallel/ServiceCmd/URL 0.5
148 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.17
149 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.22
150 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.25
151 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.43
153 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.19
154 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.36
190 TestJSONOutput/pause/Command 2.57
196 TestJSONOutput/unpause/Command 1.6
280 TestPause/serial/Pause 6.05
295 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 4.37
305 TestStartStop/group/old-k8s-version/serial/Pause 8.17
306 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 3.17
315 TestStartStop/group/no-preload/serial/Pause 6.55
319 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.71
326 TestStartStop/group/embed-certs/serial/Pause 6.4
330 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 3.03
335 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 3.17
342 TestStartStop/group/newest-cni/serial/Pause 6.43
347 TestStartStop/group/default-k8s-diff-port/serial/Pause 7.16
TestAddons/serial/Volcano (0.34s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-952140 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-952140 addons disable volcano --alsologtostderr -v=1: exit status 11 (336.828723ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1003 18:29:44.563109  292997 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:29:44.563969  292997 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:29:44.564005  292997 out.go:374] Setting ErrFile to fd 2...
	I1003 18:29:44.564024  292997 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:29:44.564333  292997 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-284583/.minikube/bin
	I1003 18:29:44.564666  292997 mustload.go:65] Loading cluster: addons-952140
	I1003 18:29:44.565153  292997 config.go:182] Loaded profile config "addons-952140": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:29:44.565192  292997 addons.go:606] checking whether the cluster is paused
	I1003 18:29:44.565339  292997 config.go:182] Loaded profile config "addons-952140": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:29:44.565371  292997 host.go:66] Checking if "addons-952140" exists ...
	I1003 18:29:44.565860  292997 cli_runner.go:164] Run: docker container inspect addons-952140 --format={{.State.Status}}
	I1003 18:29:44.604612  292997 ssh_runner.go:195] Run: systemctl --version
	I1003 18:29:44.604671  292997 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952140
	I1003 18:29:44.623411  292997 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/addons-952140/id_rsa Username:docker}
	I1003 18:29:44.723977  292997 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1003 18:29:44.724117  292997 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1003 18:29:44.752074  292997 cri.go:89] found id: "a2a54b8525b1b03c3294a286e260174ebeb999c736e2e29750632824346e2b8a"
	I1003 18:29:44.752095  292997 cri.go:89] found id: "764f61b1d1b52574dff121ce3057ed9a2791b059752cb80d76e6a5ae323e3765"
	I1003 18:29:44.752101  292997 cri.go:89] found id: "5520f176a27b0060104f01c653743a97419cd7df90b959dc02f4359563db372f"
	I1003 18:29:44.752105  292997 cri.go:89] found id: "a55dd027b4c2417a4e857716af2ec80adf3ee359efc1fcdea96ae017da8094db"
	I1003 18:29:44.752108  292997 cri.go:89] found id: "d11765424ad977c42ad7828e106df59281b6041a6b85d34d604738d051cc2257"
	I1003 18:29:44.752112  292997 cri.go:89] found id: "ba5695d849b4ff437b5c5a4c73351652ea5b855eb0061d3826ad4a2a76513650"
	I1003 18:29:44.752116  292997 cri.go:89] found id: "351cf9cd8e8f80a1ce058ad47867cc1e9e314f2100ba10ef01326c91fbea576c"
	I1003 18:29:44.752119  292997 cri.go:89] found id: "c2d0db82bc7f2bcfc4af04f3633a094c0e554392449fbf12a24ed377b92f941b"
	I1003 18:29:44.752122  292997 cri.go:89] found id: "5925d6c423d79839f9eb8870977fb293e3c6b1ece77aa59bf7c2a4b120ca3ad3"
	I1003 18:29:44.752133  292997 cri.go:89] found id: "228036e3d30218b16026d557d3264fc361f0c7c42c143fc93a96fd7945d8bdf3"
	I1003 18:29:44.752137  292997 cri.go:89] found id: "d38c57e36e3594ef4f8f3d28db24890c659027ed75977701aa969ce142c27e0e"
	I1003 18:29:44.752140  292997 cri.go:89] found id: "8ab3974a2c302b83e53bc5a243fae87bdec8ed1ca2da979ebcc29dabb8f30fc4"
	I1003 18:29:44.752144  292997 cri.go:89] found id: "70497b5707570324a85bde79dadf41e8e6ded9bd45545ee1a7756ba32eed86d6"
	I1003 18:29:44.752148  292997 cri.go:89] found id: "26742750260bfb48e7909f410307ee53b3dafe6b84bb3a467c505e24d28d4fe1"
	I1003 18:29:44.752151  292997 cri.go:89] found id: "7099c81ca982b78bfa4dd5784e69f027f40fb02b99bce69ec1f792090be6a50b"
	I1003 18:29:44.752159  292997 cri.go:89] found id: "2657f869bb8529138f74b802beedcd922a626ac30c50e54c72731eaff1b930c0"
	I1003 18:29:44.752166  292997 cri.go:89] found id: "82907fef03cc43b849878194de7aef8c729ee89dcf5fddba29650a239ab81e90"
	I1003 18:29:44.752171  292997 cri.go:89] found id: "28257b7548dee5496025c494fc69f7d27b158c004459fe9cf7e145244cc402b4"
	I1003 18:29:44.752174  292997 cri.go:89] found id: "1a59139ec0face1693267071ca3c3ba3e8eff397418ffbf25f3682c68eee244a"
	I1003 18:29:44.752177  292997 cri.go:89] found id: "23bd53ece83d04d894e5fc60fda04a6f8bdfe8d6c59ffad6c4dcacc168ec4ed8"
	I1003 18:29:44.752183  292997 cri.go:89] found id: "1cbcaf90a28158f2a4d5495c4b92561650195912704daec05dcf1d9b56429e5c"
	I1003 18:29:44.752186  292997 cri.go:89] found id: "22981c6dff74a1d10571b76dae9b7bbbb33ca3843ab35927e1e5997100c5be1c"
	I1003 18:29:44.752189  292997 cri.go:89] found id: "e937e437e1e79c6bcbb92c82ee9849b6f8ceb2c5980d23b084e27a6fb88ab45a"
	I1003 18:29:44.752192  292997 cri.go:89] found id: ""
	I1003 18:29:44.752241  292997 ssh_runner.go:195] Run: sudo runc list -f json
	I1003 18:29:44.766540  292997 out.go:203] 
	W1003 18:29:44.769418  292997 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-03T18:29:44Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-03T18:29:44Z" level=error msg="open /run/runc: no such file or directory"
	
	W1003 18:29:44.769457  292997 out.go:285] * 
	* 
	W1003 18:29:44.810307  292997 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 18:29:44.813422  292997 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-arm64 -p addons-952140 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.34s)

TestAddons/parallel/Registry (16.77s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 6.613208ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-88sgc" [749ffc38-9d67-4777-b96d-422ce39f2b46] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.010644379s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-4nwwr" [5ad2d6c8-13b3-4729-a243-b2881c6c7d2b] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.00381912s
addons_test.go:392: (dbg) Run:  kubectl --context addons-952140 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-952140 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-952140 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.232843966s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-952140 ip
2025/10/03 18:30:10 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-952140 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-952140 addons disable registry --alsologtostderr -v=1: exit status 11 (261.16783ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1003 18:30:10.807584  294022 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:30:10.808385  294022 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:30:10.808422  294022 out.go:374] Setting ErrFile to fd 2...
	I1003 18:30:10.808441  294022 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:30:10.808786  294022 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-284583/.minikube/bin
	I1003 18:30:10.809115  294022 mustload.go:65] Loading cluster: addons-952140
	I1003 18:30:10.809525  294022 config.go:182] Loaded profile config "addons-952140": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:30:10.809570  294022 addons.go:606] checking whether the cluster is paused
	I1003 18:30:10.809704  294022 config.go:182] Loaded profile config "addons-952140": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:30:10.809742  294022 host.go:66] Checking if "addons-952140" exists ...
	I1003 18:30:10.810266  294022 cli_runner.go:164] Run: docker container inspect addons-952140 --format={{.State.Status}}
	I1003 18:30:10.828603  294022 ssh_runner.go:195] Run: systemctl --version
	I1003 18:30:10.828661  294022 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952140
	I1003 18:30:10.852699  294022 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/addons-952140/id_rsa Username:docker}
	I1003 18:30:10.947158  294022 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1003 18:30:10.947240  294022 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1003 18:30:10.974876  294022 cri.go:89] found id: "a2a54b8525b1b03c3294a286e260174ebeb999c736e2e29750632824346e2b8a"
	I1003 18:30:10.974898  294022 cri.go:89] found id: "764f61b1d1b52574dff121ce3057ed9a2791b059752cb80d76e6a5ae323e3765"
	I1003 18:30:10.974903  294022 cri.go:89] found id: "5520f176a27b0060104f01c653743a97419cd7df90b959dc02f4359563db372f"
	I1003 18:30:10.974907  294022 cri.go:89] found id: "a55dd027b4c2417a4e857716af2ec80adf3ee359efc1fcdea96ae017da8094db"
	I1003 18:30:10.974910  294022 cri.go:89] found id: "d11765424ad977c42ad7828e106df59281b6041a6b85d34d604738d051cc2257"
	I1003 18:30:10.974914  294022 cri.go:89] found id: "ba5695d849b4ff437b5c5a4c73351652ea5b855eb0061d3826ad4a2a76513650"
	I1003 18:30:10.974917  294022 cri.go:89] found id: "351cf9cd8e8f80a1ce058ad47867cc1e9e314f2100ba10ef01326c91fbea576c"
	I1003 18:30:10.974920  294022 cri.go:89] found id: "c2d0db82bc7f2bcfc4af04f3633a094c0e554392449fbf12a24ed377b92f941b"
	I1003 18:30:10.974923  294022 cri.go:89] found id: "5925d6c423d79839f9eb8870977fb293e3c6b1ece77aa59bf7c2a4b120ca3ad3"
	I1003 18:30:10.974933  294022 cri.go:89] found id: "228036e3d30218b16026d557d3264fc361f0c7c42c143fc93a96fd7945d8bdf3"
	I1003 18:30:10.974943  294022 cri.go:89] found id: "d38c57e36e3594ef4f8f3d28db24890c659027ed75977701aa969ce142c27e0e"
	I1003 18:30:10.974946  294022 cri.go:89] found id: "8ab3974a2c302b83e53bc5a243fae87bdec8ed1ca2da979ebcc29dabb8f30fc4"
	I1003 18:30:10.974949  294022 cri.go:89] found id: "70497b5707570324a85bde79dadf41e8e6ded9bd45545ee1a7756ba32eed86d6"
	I1003 18:30:10.974952  294022 cri.go:89] found id: "26742750260bfb48e7909f410307ee53b3dafe6b84bb3a467c505e24d28d4fe1"
	I1003 18:30:10.974955  294022 cri.go:89] found id: "7099c81ca982b78bfa4dd5784e69f027f40fb02b99bce69ec1f792090be6a50b"
	I1003 18:30:10.974969  294022 cri.go:89] found id: "2657f869bb8529138f74b802beedcd922a626ac30c50e54c72731eaff1b930c0"
	I1003 18:30:10.974976  294022 cri.go:89] found id: "82907fef03cc43b849878194de7aef8c729ee89dcf5fddba29650a239ab81e90"
	I1003 18:30:10.974980  294022 cri.go:89] found id: "28257b7548dee5496025c494fc69f7d27b158c004459fe9cf7e145244cc402b4"
	I1003 18:30:10.974983  294022 cri.go:89] found id: "1a59139ec0face1693267071ca3c3ba3e8eff397418ffbf25f3682c68eee244a"
	I1003 18:30:10.974986  294022 cri.go:89] found id: "23bd53ece83d04d894e5fc60fda04a6f8bdfe8d6c59ffad6c4dcacc168ec4ed8"
	I1003 18:30:10.974991  294022 cri.go:89] found id: "1cbcaf90a28158f2a4d5495c4b92561650195912704daec05dcf1d9b56429e5c"
	I1003 18:30:10.974997  294022 cri.go:89] found id: "22981c6dff74a1d10571b76dae9b7bbbb33ca3843ab35927e1e5997100c5be1c"
	I1003 18:30:10.975000  294022 cri.go:89] found id: "e937e437e1e79c6bcbb92c82ee9849b6f8ceb2c5980d23b084e27a6fb88ab45a"
	I1003 18:30:10.975003  294022 cri.go:89] found id: ""
	I1003 18:30:10.975061  294022 ssh_runner.go:195] Run: sudo runc list -f json
	I1003 18:30:10.998317  294022 out.go:203] 
	W1003 18:30:11.001524  294022 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-03T18:30:10Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-03T18:30:10Z" level=error msg="open /run/runc: no such file or directory"
	
	W1003 18:30:11.001551  294022 out.go:285] * 
	* 
	W1003 18:30:11.008791  294022 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 18:30:11.011980  294022 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-arm64 -p addons-952140 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (16.77s)

TestAddons/parallel/RegistryCreds (0.51s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 4.929336ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-952140
addons_test.go:332: (dbg) Run:  kubectl --context addons-952140 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-952140 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-952140 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (262.750831ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1003 18:31:05.311257  295534 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:31:05.313134  295534 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:31:05.313181  295534 out.go:374] Setting ErrFile to fd 2...
	I1003 18:31:05.313200  295534 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:31:05.313516  295534 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-284583/.minikube/bin
	I1003 18:31:05.313880  295534 mustload.go:65] Loading cluster: addons-952140
	I1003 18:31:05.314731  295534 config.go:182] Loaded profile config "addons-952140": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:31:05.314755  295534 addons.go:606] checking whether the cluster is paused
	I1003 18:31:05.314902  295534 config.go:182] Loaded profile config "addons-952140": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:31:05.314938  295534 host.go:66] Checking if "addons-952140" exists ...
	I1003 18:31:05.315441  295534 cli_runner.go:164] Run: docker container inspect addons-952140 --format={{.State.Status}}
	I1003 18:31:05.332832  295534 ssh_runner.go:195] Run: systemctl --version
	I1003 18:31:05.332906  295534 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952140
	I1003 18:31:05.350940  295534 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/addons-952140/id_rsa Username:docker}
	I1003 18:31:05.447329  295534 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1003 18:31:05.447419  295534 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1003 18:31:05.480357  295534 cri.go:89] found id: "a2a54b8525b1b03c3294a286e260174ebeb999c736e2e29750632824346e2b8a"
	I1003 18:31:05.480381  295534 cri.go:89] found id: "764f61b1d1b52574dff121ce3057ed9a2791b059752cb80d76e6a5ae323e3765"
	I1003 18:31:05.480387  295534 cri.go:89] found id: "5520f176a27b0060104f01c653743a97419cd7df90b959dc02f4359563db372f"
	I1003 18:31:05.480391  295534 cri.go:89] found id: "a55dd027b4c2417a4e857716af2ec80adf3ee359efc1fcdea96ae017da8094db"
	I1003 18:31:05.480399  295534 cri.go:89] found id: "d11765424ad977c42ad7828e106df59281b6041a6b85d34d604738d051cc2257"
	I1003 18:31:05.480403  295534 cri.go:89] found id: "ba5695d849b4ff437b5c5a4c73351652ea5b855eb0061d3826ad4a2a76513650"
	I1003 18:31:05.480407  295534 cri.go:89] found id: "351cf9cd8e8f80a1ce058ad47867cc1e9e314f2100ba10ef01326c91fbea576c"
	I1003 18:31:05.480411  295534 cri.go:89] found id: "c2d0db82bc7f2bcfc4af04f3633a094c0e554392449fbf12a24ed377b92f941b"
	I1003 18:31:05.480414  295534 cri.go:89] found id: "5925d6c423d79839f9eb8870977fb293e3c6b1ece77aa59bf7c2a4b120ca3ad3"
	I1003 18:31:05.480420  295534 cri.go:89] found id: "228036e3d30218b16026d557d3264fc361f0c7c42c143fc93a96fd7945d8bdf3"
	I1003 18:31:05.480424  295534 cri.go:89] found id: "d38c57e36e3594ef4f8f3d28db24890c659027ed75977701aa969ce142c27e0e"
	I1003 18:31:05.480427  295534 cri.go:89] found id: "8ab3974a2c302b83e53bc5a243fae87bdec8ed1ca2da979ebcc29dabb8f30fc4"
	I1003 18:31:05.480430  295534 cri.go:89] found id: "70497b5707570324a85bde79dadf41e8e6ded9bd45545ee1a7756ba32eed86d6"
	I1003 18:31:05.480433  295534 cri.go:89] found id: "26742750260bfb48e7909f410307ee53b3dafe6b84bb3a467c505e24d28d4fe1"
	I1003 18:31:05.480437  295534 cri.go:89] found id: "7099c81ca982b78bfa4dd5784e69f027f40fb02b99bce69ec1f792090be6a50b"
	I1003 18:31:05.480442  295534 cri.go:89] found id: "2657f869bb8529138f74b802beedcd922a626ac30c50e54c72731eaff1b930c0"
	I1003 18:31:05.480446  295534 cri.go:89] found id: "82907fef03cc43b849878194de7aef8c729ee89dcf5fddba29650a239ab81e90"
	I1003 18:31:05.480451  295534 cri.go:89] found id: "28257b7548dee5496025c494fc69f7d27b158c004459fe9cf7e145244cc402b4"
	I1003 18:31:05.480455  295534 cri.go:89] found id: "1a59139ec0face1693267071ca3c3ba3e8eff397418ffbf25f3682c68eee244a"
	I1003 18:31:05.480457  295534 cri.go:89] found id: "23bd53ece83d04d894e5fc60fda04a6f8bdfe8d6c59ffad6c4dcacc168ec4ed8"
	I1003 18:31:05.480463  295534 cri.go:89] found id: "1cbcaf90a28158f2a4d5495c4b92561650195912704daec05dcf1d9b56429e5c"
	I1003 18:31:05.480466  295534 cri.go:89] found id: "22981c6dff74a1d10571b76dae9b7bbbb33ca3843ab35927e1e5997100c5be1c"
	I1003 18:31:05.480468  295534 cri.go:89] found id: "e937e437e1e79c6bcbb92c82ee9849b6f8ceb2c5980d23b084e27a6fb88ab45a"
	I1003 18:31:05.480471  295534 cri.go:89] found id: ""
	I1003 18:31:05.480532  295534 ssh_runner.go:195] Run: sudo runc list -f json
	I1003 18:31:05.495887  295534 out.go:203] 
	W1003 18:31:05.498983  295534 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-03T18:31:05Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-03T18:31:05Z" level=error msg="open /run/runc: no such file or directory"
	
	W1003 18:31:05.499009  295534 out.go:285] * 
	* 
	W1003 18:31:05.505357  295534 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 18:31:05.508247  295534 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-arm64 -p addons-952140 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.51s)

TestAddons/parallel/Ingress (144.9s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-952140 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-952140 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-952140 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [9892a807-48f5-445d-813c-af35c3c33444] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [9892a807-48f5-445d-813c-af35c3c33444] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003127635s
I1003 18:30:31.335294  286434 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-952140 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-952140 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.495288889s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-952140 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-952140 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-952140
helpers_test.go:243: (dbg) docker inspect addons-952140:

-- stdout --
	[
	    {
	        "Id": "85b69962c0dc4c2d215c8870f97829566d7c577f428241564d0dd056e84304a6",
	        "Created": "2025-10-03T18:27:19.855189615Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 287583,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-03T18:27:19.913676844Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/85b69962c0dc4c2d215c8870f97829566d7c577f428241564d0dd056e84304a6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/85b69962c0dc4c2d215c8870f97829566d7c577f428241564d0dd056e84304a6/hostname",
	        "HostsPath": "/var/lib/docker/containers/85b69962c0dc4c2d215c8870f97829566d7c577f428241564d0dd056e84304a6/hosts",
	        "LogPath": "/var/lib/docker/containers/85b69962c0dc4c2d215c8870f97829566d7c577f428241564d0dd056e84304a6/85b69962c0dc4c2d215c8870f97829566d7c577f428241564d0dd056e84304a6-json.log",
	        "Name": "/addons-952140",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-952140:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-952140",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "85b69962c0dc4c2d215c8870f97829566d7c577f428241564d0dd056e84304a6",
	                "LowerDir": "/var/lib/docker/overlay2/af2feed79df5584ff68bcd67773f16b1405a7fad3408cae5965483c88a8058de-init/diff:/var/lib/docker/overlay2/87b205803817b0b71a214d995ab7e10a92033bbf72d76d6e052f1d21ccecb313/diff",
	                "MergedDir": "/var/lib/docker/overlay2/af2feed79df5584ff68bcd67773f16b1405a7fad3408cae5965483c88a8058de/merged",
	                "UpperDir": "/var/lib/docker/overlay2/af2feed79df5584ff68bcd67773f16b1405a7fad3408cae5965483c88a8058de/diff",
	                "WorkDir": "/var/lib/docker/overlay2/af2feed79df5584ff68bcd67773f16b1405a7fad3408cae5965483c88a8058de/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-952140",
	                "Source": "/var/lib/docker/volumes/addons-952140/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-952140",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-952140",
	                "name.minikube.sigs.k8s.io": "addons-952140",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "08b67d1352c657f54ec558bf835b927545829aa9a1fb88449a14ba61bd7df350",
	            "SandboxKey": "/var/run/docker/netns/08b67d1352c6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-952140": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8e:55:f8:24:f6:5c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e102fc4e4e8b2c7a717e7cf7e622833192d1b0f46c494a0da77e2c59f148cd18",
	                    "EndpointID": "5396c29e50147b89cfaea761a6acbfd0662c08046a695d7b749d5f110fd8d0fa",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-952140",
	                        "85b69962c0dc"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-952140 -n addons-952140
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-952140 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-952140 logs -n 25: (1.89294958s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-docker-526019                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-526019 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ start   │ --download-only -p binary-mirror-482654 --alsologtostderr --binary-mirror http://127.0.0.1:38575 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-482654   │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ delete  │ -p binary-mirror-482654                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-482654   │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ addons  │ enable dashboard -p addons-952140                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-952140          │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ addons  │ disable dashboard -p addons-952140                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-952140          │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ start   │ -p addons-952140 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-952140          │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:29 UTC │
	│ addons  │ addons-952140 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-952140          │ jenkins │ v1.37.0 │ 03 Oct 25 18:29 UTC │                     │
	│ addons  │ addons-952140 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-952140          │ jenkins │ v1.37.0 │ 03 Oct 25 18:29 UTC │                     │
	│ addons  │ enable headlamp -p addons-952140 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-952140          │ jenkins │ v1.37.0 │ 03 Oct 25 18:29 UTC │                     │
	│ addons  │ addons-952140 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-952140          │ jenkins │ v1.37.0 │ 03 Oct 25 18:29 UTC │                     │
	│ addons  │ addons-952140 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-952140          │ jenkins │ v1.37.0 │ 03 Oct 25 18:30 UTC │                     │
	│ addons  │ addons-952140 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-952140          │ jenkins │ v1.37.0 │ 03 Oct 25 18:30 UTC │                     │
	│ ip      │ addons-952140 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-952140          │ jenkins │ v1.37.0 │ 03 Oct 25 18:30 UTC │ 03 Oct 25 18:30 UTC │
	│ addons  │ addons-952140 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-952140          │ jenkins │ v1.37.0 │ 03 Oct 25 18:30 UTC │                     │
	│ addons  │ addons-952140 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-952140          │ jenkins │ v1.37.0 │ 03 Oct 25 18:30 UTC │                     │
	│ ssh     │ addons-952140 ssh cat /opt/local-path-provisioner/pvc-a5fb303d-41e5-4aba-bf5e-80bf1ea770ef_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-952140          │ jenkins │ v1.37.0 │ 03 Oct 25 18:30 UTC │ 03 Oct 25 18:30 UTC │
	│ addons  │ addons-952140 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-952140          │ jenkins │ v1.37.0 │ 03 Oct 25 18:30 UTC │                     │
	│ addons  │ addons-952140 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-952140          │ jenkins │ v1.37.0 │ 03 Oct 25 18:30 UTC │                     │
	│ ssh     │ addons-952140 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-952140          │ jenkins │ v1.37.0 │ 03 Oct 25 18:30 UTC │                     │
	│ addons  │ addons-952140 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-952140          │ jenkins │ v1.37.0 │ 03 Oct 25 18:30 UTC │                     │
	│ addons  │ addons-952140 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-952140          │ jenkins │ v1.37.0 │ 03 Oct 25 18:30 UTC │                     │
	│ addons  │ addons-952140 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-952140          │ jenkins │ v1.37.0 │ 03 Oct 25 18:31 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-952140                                                                                                                                                                                                                                                                                                                                                                                           │ addons-952140          │ jenkins │ v1.37.0 │ 03 Oct 25 18:31 UTC │ 03 Oct 25 18:31 UTC │
	│ addons  │ addons-952140 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-952140          │ jenkins │ v1.37.0 │ 03 Oct 25 18:31 UTC │                     │
	│ ip      │ addons-952140 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-952140          │ jenkins │ v1.37.0 │ 03 Oct 25 18:32 UTC │ 03 Oct 25 18:32 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/03 18:26:53
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 18:26:53.258370  287189 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:26:53.258537  287189 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:26:53.258566  287189 out.go:374] Setting ErrFile to fd 2...
	I1003 18:26:53.258587  287189 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:26:53.258967  287189 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-284583/.minikube/bin
	I1003 18:26:53.259863  287189 out.go:368] Setting JSON to false
	I1003 18:26:53.260750  287189 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":4165,"bootTime":1759511849,"procs":149,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1003 18:26:53.260817  287189 start.go:140] virtualization:  
	I1003 18:26:53.264033  287189 out.go:179] * [addons-952140] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1003 18:26:53.267739  287189 out.go:179]   - MINIKUBE_LOCATION=21625
	I1003 18:26:53.267804  287189 notify.go:220] Checking for updates...
	I1003 18:26:53.273340  287189 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 18:26:53.276093  287189 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21625-284583/kubeconfig
	I1003 18:26:53.278928  287189 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-284583/.minikube
	I1003 18:26:53.281777  287189 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1003 18:26:53.284697  287189 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 18:26:53.287745  287189 driver.go:421] Setting default libvirt URI to qemu:///system
	I1003 18:26:53.307839  287189 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1003 18:26:53.307970  287189 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:26:53.373367  287189 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:49 SystemTime:2025-10-03 18:26:53.364113847 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1003 18:26:53.373470  287189 docker.go:318] overlay module found
	I1003 18:26:53.378356  287189 out.go:179] * Using the docker driver based on user configuration
	I1003 18:26:53.381255  287189 start.go:304] selected driver: docker
	I1003 18:26:53.381280  287189 start.go:924] validating driver "docker" against <nil>
	I1003 18:26:53.381294  287189 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 18:26:53.382012  287189 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:26:53.434317  287189 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:49 SystemTime:2025-10-03 18:26:53.424871215 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1003 18:26:53.434483  287189 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1003 18:26:53.434718  287189 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 18:26:53.437564  287189 out.go:179] * Using Docker driver with root privileges
	I1003 18:26:53.440295  287189 cni.go:84] Creating CNI manager for ""
	I1003 18:26:53.440365  287189 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 18:26:53.440378  287189 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1003 18:26:53.440468  287189 start.go:348] cluster config:
	{Name:addons-952140 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-952140 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:26:53.443515  287189 out.go:179] * Starting "addons-952140" primary control-plane node in "addons-952140" cluster
	I1003 18:26:53.446262  287189 cache.go:123] Beginning downloading kic base image for docker with crio
	I1003 18:26:53.449094  287189 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1003 18:26:53.451835  287189 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 18:26:53.451864  287189 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1003 18:26:53.451885  287189 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21625-284583/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1003 18:26:53.451903  287189 cache.go:58] Caching tarball of preloaded images
	I1003 18:26:53.451981  287189 preload.go:233] Found /home/jenkins/minikube-integration/21625-284583/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1003 18:26:53.451991  287189 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1003 18:26:53.452349  287189 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/config.json ...
	I1003 18:26:53.452371  287189 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/config.json: {Name:mk3ec801b1a665b1e71f8e04e2ef22390583bd1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:26:53.468260  287189 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d to local cache
	I1003 18:26:53.468397  287189 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory
	I1003 18:26:53.468433  287189 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory, skipping pull
	I1003 18:26:53.468443  287189 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in cache, skipping pull
	I1003 18:26:53.468450  287189 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d as a tarball
	I1003 18:26:53.468460  287189 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d from local cache
	I1003 18:27:11.531443  287189 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d from cached tarball
	I1003 18:27:11.531482  287189 cache.go:232] Successfully downloaded all kic artifacts
	I1003 18:27:11.531512  287189 start.go:360] acquireMachinesLock for addons-952140: {Name:mkd6a11acda609d82d4d50b6e8e52d51cc676e0e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 18:27:11.531635  287189 start.go:364] duration metric: took 97.671µs to acquireMachinesLock for "addons-952140"
	I1003 18:27:11.531667  287189 start.go:93] Provisioning new machine with config: &{Name:addons-952140 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-952140 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1003 18:27:11.531753  287189 start.go:125] createHost starting for "" (driver="docker")
	I1003 18:27:11.535205  287189 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1003 18:27:11.535450  287189 start.go:159] libmachine.API.Create for "addons-952140" (driver="docker")
	I1003 18:27:11.535499  287189 client.go:168] LocalClient.Create starting
	I1003 18:27:11.535623  287189 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem
	I1003 18:27:12.987225  287189 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem
	I1003 18:27:13.052378  287189 cli_runner.go:164] Run: docker network inspect addons-952140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1003 18:27:13.068576  287189 cli_runner.go:211] docker network inspect addons-952140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1003 18:27:13.068677  287189 network_create.go:284] running [docker network inspect addons-952140] to gather additional debugging logs...
	I1003 18:27:13.068700  287189 cli_runner.go:164] Run: docker network inspect addons-952140
	W1003 18:27:13.084782  287189 cli_runner.go:211] docker network inspect addons-952140 returned with exit code 1
	I1003 18:27:13.084831  287189 network_create.go:287] error running [docker network inspect addons-952140]: docker network inspect addons-952140: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-952140 not found
	I1003 18:27:13.084846  287189 network_create.go:289] output of [docker network inspect addons-952140]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-952140 not found
	
	** /stderr **
	I1003 18:27:13.084967  287189 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 18:27:13.102476  287189 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001d260a0}
	I1003 18:27:13.102515  287189 network_create.go:124] attempt to create docker network addons-952140 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1003 18:27:13.102573  287189 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-952140 addons-952140
	I1003 18:27:13.159876  287189 network_create.go:108] docker network addons-952140 192.168.49.0/24 created
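The network.go:206 line above reports the first free private /24 minikube found for the cluster network (192.168.49.0/24 here). A rough Go sketch of that idea follows; the function name, starting subnet and step are illustrative assumptions, not minikube's actual network.go, and the inUse map would in practice be filled from `docker network inspect` output like the calls above.

package main

import (
	"fmt"
	"net"
)

// freePrivateSubnet returns the first candidate 192.168.x.0/24 that is not
// already claimed by an existing docker network.
func freePrivateSubnet(inUse map[string]bool) (*net.IPNet, error) {
	for third := 49; third < 256; third += 9 { // 192.168.49.0/24, 192.168.58.0/24, ...
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		if inUse[cidr] {
			continue
		}
		_, subnet, err := net.ParseCIDR(cidr)
		if err != nil {
			return nil, err
		}
		return subnet, nil
	}
	return nil, fmt.Errorf("no free private /24 found")
}

func main() {
	subnet, err := freePrivateSubnet(map[string]bool{}) // nothing in use yet
	fmt.Println(subnet, err)                            // 192.168.49.0/24 <nil>
}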
	I1003 18:27:13.159910  287189 kic.go:121] calculated static IP "192.168.49.2" for the "addons-952140" container
	I1003 18:27:13.159994  287189 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1003 18:27:13.175310  287189 cli_runner.go:164] Run: docker volume create addons-952140 --label name.minikube.sigs.k8s.io=addons-952140 --label created_by.minikube.sigs.k8s.io=true
	I1003 18:27:13.196077  287189 oci.go:103] Successfully created a docker volume addons-952140
	I1003 18:27:13.196196  287189 cli_runner.go:164] Run: docker run --rm --name addons-952140-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-952140 --entrypoint /usr/bin/test -v addons-952140:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1003 18:27:15.341088  287189 cli_runner.go:217] Completed: docker run --rm --name addons-952140-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-952140 --entrypoint /usr/bin/test -v addons-952140:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib: (2.144842495s)
	I1003 18:27:15.341121  287189 oci.go:107] Successfully prepared a docker volume addons-952140
	I1003 18:27:15.341155  287189 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 18:27:15.341176  287189 kic.go:194] Starting extracting preloaded images to volume ...
	I1003 18:27:15.341250  287189 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21625-284583/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-952140:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1003 18:27:19.781444  287189 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21625-284583/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-952140:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.440151134s)
	I1003 18:27:19.781476  287189 kic.go:203] duration metric: took 4.440297572s to extract preloaded images to volume ...
	W1003 18:27:19.781627  287189 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1003 18:27:19.781738  287189 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1003 18:27:19.840410  287189 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-952140 --name addons-952140 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-952140 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-952140 --network addons-952140 --ip 192.168.49.2 --volume addons-952140:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1003 18:27:20.168240  287189 cli_runner.go:164] Run: docker container inspect addons-952140 --format={{.State.Running}}
	I1003 18:27:20.187708  287189 cli_runner.go:164] Run: docker container inspect addons-952140 --format={{.State.Status}}
	I1003 18:27:20.215470  287189 cli_runner.go:164] Run: docker exec addons-952140 stat /var/lib/dpkg/alternatives/iptables
	I1003 18:27:20.269170  287189 oci.go:144] the created container "addons-952140" has a running status.
	I1003 18:27:20.269198  287189 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21625-284583/.minikube/machines/addons-952140/id_rsa...
	I1003 18:27:20.371880  287189 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21625-284583/.minikube/machines/addons-952140/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1003 18:27:20.392209  287189 cli_runner.go:164] Run: docker container inspect addons-952140 --format={{.State.Status}}
	I1003 18:27:20.409313  287189 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1003 18:27:20.409335  287189 kic_runner.go:114] Args: [docker exec --privileged addons-952140 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1003 18:27:20.463222  287189 cli_runner.go:164] Run: docker container inspect addons-952140 --format={{.State.Status}}
	I1003 18:27:20.501808  287189 machine.go:93] provisionDockerMachine start ...
	I1003 18:27:20.501919  287189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952140
	I1003 18:27:20.523050  287189 main.go:141] libmachine: Using SSH client type: native
	I1003 18:27:20.523428  287189 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1003 18:27:20.523445  287189 main.go:141] libmachine: About to run SSH command:
	hostname
	I1003 18:27:20.524034  287189 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1003 18:27:23.660422  287189 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-952140
	
	I1003 18:27:23.660447  287189 ubuntu.go:182] provisioning hostname "addons-952140"
	I1003 18:27:23.660515  287189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952140
	I1003 18:27:23.678923  287189 main.go:141] libmachine: Using SSH client type: native
	I1003 18:27:23.679241  287189 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1003 18:27:23.679260  287189 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-952140 && echo "addons-952140" | sudo tee /etc/hostname
	I1003 18:27:23.817967  287189 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-952140
	
	I1003 18:27:23.818049  287189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952140
	I1003 18:27:23.835450  287189 main.go:141] libmachine: Using SSH client type: native
	I1003 18:27:23.835760  287189 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1003 18:27:23.835781  287189 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-952140' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-952140/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-952140' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 18:27:23.965132  287189 main.go:141] libmachine: SSH cmd err, output: <nil>: 
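Each "About to run SSH command" step above is a shell snippet executed on the kic container through the SSH port docker published on 127.0.0.1:33138. A minimal sketch of the same pattern with golang.org/x/crypto/ssh is below; the helper name is hypothetical and the address, user and key path are simply copied from the log.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runOverSSH executes one shell command on the machine and returns its output.
func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // tolerable for a throwaway local node, not in general
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()
	out, err := session.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runOverSSH("127.0.0.1:33138", "docker",
		"/home/jenkins/minikube-integration/21625-284583/.minikube/machines/addons-952140/id_rsa",
		"hostname")
	fmt.Println(out, err)
}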
	I1003 18:27:23.965205  287189 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21625-284583/.minikube CaCertPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21625-284583/.minikube}
	I1003 18:27:23.965231  287189 ubuntu.go:190] setting up certificates
	I1003 18:27:23.965240  287189 provision.go:84] configureAuth start
	I1003 18:27:23.965309  287189 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-952140
	I1003 18:27:23.997785  287189 provision.go:143] copyHostCerts
	I1003 18:27:23.997878  287189 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21625-284583/.minikube/key.pem (1675 bytes)
	I1003 18:27:23.998039  287189 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21625-284583/.minikube/ca.pem (1082 bytes)
	I1003 18:27:23.998110  287189 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21625-284583/.minikube/cert.pem (1123 bytes)
	I1003 18:27:23.998164  287189 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca-key.pem org=jenkins.addons-952140 san=[127.0.0.1 192.168.49.2 addons-952140 localhost minikube]
	I1003 18:27:24.847072  287189 provision.go:177] copyRemoteCerts
	I1003 18:27:24.847166  287189 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 18:27:24.847217  287189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952140
	I1003 18:27:24.863338  287189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/addons-952140/id_rsa Username:docker}
	I1003 18:27:24.955970  287189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 18:27:24.973258  287189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1003 18:27:24.992658  287189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1003 18:27:25.011090  287189 provision.go:87] duration metric: took 1.045836815s to configureAuth
	I1003 18:27:25.011160  287189 ubuntu.go:206] setting minikube options for container-runtime
	I1003 18:27:25.011378  287189 config.go:182] Loaded profile config "addons-952140": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:27:25.011522  287189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952140
	I1003 18:27:25.028437  287189 main.go:141] libmachine: Using SSH client type: native
	I1003 18:27:25.028815  287189 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1003 18:27:25.028840  287189 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1003 18:27:25.266764  287189 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1003 18:27:25.266790  287189 machine.go:96] duration metric: took 4.76496131s to provisionDockerMachine
	I1003 18:27:25.266800  287189 client.go:171] duration metric: took 13.731291012s to LocalClient.Create
	I1003 18:27:25.266813  287189 start.go:167] duration metric: took 13.731365388s to libmachine.API.Create "addons-952140"
	I1003 18:27:25.266819  287189 start.go:293] postStartSetup for "addons-952140" (driver="docker")
	I1003 18:27:25.266829  287189 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 18:27:25.266896  287189 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 18:27:25.266942  287189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952140
	I1003 18:27:25.284199  287189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/addons-952140/id_rsa Username:docker}
	I1003 18:27:25.380680  287189 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 18:27:25.383911  287189 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1003 18:27:25.383939  287189 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1003 18:27:25.383949  287189 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-284583/.minikube/addons for local assets ...
	I1003 18:27:25.384016  287189 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-284583/.minikube/files for local assets ...
	I1003 18:27:25.384044  287189 start.go:296] duration metric: took 117.219209ms for postStartSetup
	I1003 18:27:25.384362  287189 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-952140
	I1003 18:27:25.400538  287189 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/config.json ...
	I1003 18:27:25.400955  287189 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 18:27:25.401020  287189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952140
	I1003 18:27:25.417502  287189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/addons-952140/id_rsa Username:docker}
	I1003 18:27:25.509553  287189 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1003 18:27:25.514278  287189 start.go:128] duration metric: took 13.982508395s to createHost
	I1003 18:27:25.514348  287189 start.go:83] releasing machines lock for "addons-952140", held for 13.982697091s
	I1003 18:27:25.514453  287189 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-952140
	I1003 18:27:25.533854  287189 ssh_runner.go:195] Run: cat /version.json
	I1003 18:27:25.533913  287189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952140
	I1003 18:27:25.534165  287189 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 18:27:25.534232  287189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952140
	I1003 18:27:25.552216  287189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/addons-952140/id_rsa Username:docker}
	I1003 18:27:25.554662  287189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/addons-952140/id_rsa Username:docker}
	I1003 18:27:25.739493  287189 ssh_runner.go:195] Run: systemctl --version
	I1003 18:27:25.746062  287189 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1003 18:27:25.782177  287189 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1003 18:27:25.786238  287189 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 18:27:25.786305  287189 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 18:27:25.814818  287189 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1003 18:27:25.814857  287189 start.go:495] detecting cgroup driver to use...
	I1003 18:27:25.814893  287189 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1003 18:27:25.814959  287189 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 18:27:25.832053  287189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 18:27:25.844195  287189 docker.go:218] disabling cri-docker service (if available) ...
	I1003 18:27:25.844258  287189 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1003 18:27:25.862182  287189 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1003 18:27:25.880466  287189 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1003 18:27:25.994085  287189 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1003 18:27:26.114139  287189 docker.go:234] disabling docker service ...
	I1003 18:27:26.114226  287189 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1003 18:27:26.137484  287189 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1003 18:27:26.150396  287189 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1003 18:27:26.256024  287189 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1003 18:27:26.372857  287189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1003 18:27:26.385960  287189 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 18:27:26.399988  287189 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1003 18:27:26.400053  287189 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:27:26.408800  287189 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1003 18:27:26.408934  287189 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:27:26.418191  287189 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:27:26.426697  287189 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:27:26.435134  287189 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 18:27:26.443038  287189 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:27:26.451740  287189 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:27:26.464608  287189 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:27:26.473357  287189 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 18:27:26.480576  287189 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 18:27:26.488014  287189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:27:26.597698  287189 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1003 18:27:26.725699  287189 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1003 18:27:26.725788  287189 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
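The "Will wait 60s for socket path" step above amounts to polling for /var/run/crio/crio.sock with a deadline after crio is restarted. A small sketch of that polling loop (an assumed helper for illustration, not minikube's start.go):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath polls until the file exists or the deadline passes.
func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	fmt.Println(waitForPath("/var/run/crio/crio.sock", 60*time.Second))
}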
	I1003 18:27:26.729960  287189 start.go:563] Will wait 60s for crictl version
	I1003 18:27:26.730024  287189 ssh_runner.go:195] Run: which crictl
	I1003 18:27:26.733487  287189 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1003 18:27:26.763051  287189 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1003 18:27:26.763156  287189 ssh_runner.go:195] Run: crio --version
	I1003 18:27:26.791411  287189 ssh_runner.go:195] Run: crio --version
	I1003 18:27:26.823819  287189 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1003 18:27:26.826690  287189 cli_runner.go:164] Run: docker network inspect addons-952140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 18:27:26.842456  287189 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1003 18:27:26.846343  287189 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 18:27:26.856409  287189 kubeadm.go:883] updating cluster {Name:addons-952140 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-952140 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1003 18:27:26.856530  287189 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 18:27:26.856597  287189 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 18:27:26.890649  287189 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 18:27:26.890674  287189 crio.go:433] Images already preloaded, skipping extraction
	I1003 18:27:26.890737  287189 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 18:27:26.918855  287189 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 18:27:26.918880  287189 cache_images.go:85] Images are preloaded, skipping loading
	I1003 18:27:26.918889  287189 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1003 18:27:26.918977  287189 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-952140 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-952140 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1003 18:27:26.919068  287189 ssh_runner.go:195] Run: crio config
	I1003 18:27:26.974117  287189 cni.go:84] Creating CNI manager for ""
	I1003 18:27:26.974143  287189 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 18:27:26.974164  287189 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1003 18:27:26.974187  287189 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-952140 NodeName:addons-952140 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 18:27:26.974330  287189 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-952140"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1003 18:27:26.974414  287189 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1003 18:27:26.984460  287189 binaries.go:44] Found k8s binaries, skipping transfer
	I1003 18:27:26.984544  287189 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1003 18:27:26.993387  287189 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1003 18:27:27.007458  287189 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 18:27:27.020629  287189 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1003 18:27:27.033663  287189 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1003 18:27:27.037228  287189 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
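Both /etc/hosts edits above (host.minikube.internal earlier and control-plane.minikube.internal here) use the same grep/echo/cp idiom: drop any stale line for the name, append the new mapping, and copy the rewritten file back. The same idea as a small Go helper, purely illustrative and not part of minikube:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry removes any existing mapping for name and appends "ip<TAB>name".
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // stale entry for this name
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	fmt.Println(ensureHostsEntry("/etc/hosts", "192.168.49.2", "control-plane.minikube.internal"))
}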
	I1003 18:27:27.046757  287189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:27:27.154280  287189 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 18:27:27.169428  287189 certs.go:69] Setting up /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140 for IP: 192.168.49.2
	I1003 18:27:27.169466  287189 certs.go:195] generating shared ca certs ...
	I1003 18:27:27.169483  287189 certs.go:227] acquiring lock for ca certs: {Name:mk5a10e6c921326e9c211447576eaeb893259ba7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:27:27.169741  287189 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21625-284583/.minikube/ca.key
	I1003 18:27:28.051313  287189 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-284583/.minikube/ca.crt ...
	I1003 18:27:28.051369  287189 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/ca.crt: {Name:mk4762d571a7a8484888e142e032b018ed06ae45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:27:28.051576  287189 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-284583/.minikube/ca.key ...
	I1003 18:27:28.051590  287189 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/ca.key: {Name:mk3482c30285b4babfb26eaf5951feb9c1fe2920 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:27:28.051689  287189 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.key
	I1003 18:27:28.237762  287189 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.crt ...
	I1003 18:27:28.237792  287189 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.crt: {Name:mkafbd54c049b3bb6f950505f085641692ae365d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:27:28.237966  287189 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.key ...
	I1003 18:27:28.237979  287189 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.key: {Name:mkb1422c38587215187c66c3c57c750e98643381 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:27:28.238671  287189 certs.go:257] generating profile certs ...
	I1003 18:27:28.238742  287189 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/client.key
	I1003 18:27:28.238760  287189 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/client.crt with IP's: []
	I1003 18:27:28.489726  287189 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/client.crt ...
	I1003 18:27:28.489758  287189 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/client.crt: {Name:mk96a252ffc9b3e664309d46953d957d82a24126 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:27:28.489966  287189 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/client.key ...
	I1003 18:27:28.489984  287189 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/client.key: {Name:mk4a30a979ca12ac9d25eeaf2eb1b582a8e60aa8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:27:28.490077  287189 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/apiserver.key.f1fb8b4f
	I1003 18:27:28.490099  287189 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/apiserver.crt.f1fb8b4f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1003 18:27:28.765602  287189 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/apiserver.crt.f1fb8b4f ...
	I1003 18:27:28.765635  287189 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/apiserver.crt.f1fb8b4f: {Name:mk49522fcb61d177cb35d2e803b82ca25f278e14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:27:28.765813  287189 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/apiserver.key.f1fb8b4f ...
	I1003 18:27:28.765827  287189 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/apiserver.key.f1fb8b4f: {Name:mk77ffe454005fa8c41ea69a48307a698967e656 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:27:28.765927  287189 certs.go:382] copying /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/apiserver.crt.f1fb8b4f -> /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/apiserver.crt
	I1003 18:27:28.766021  287189 certs.go:386] copying /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/apiserver.key.f1fb8b4f -> /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/apiserver.key
	I1003 18:27:28.766080  287189 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/proxy-client.key
	I1003 18:27:28.766102  287189 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/proxy-client.crt with IP's: []
	I1003 18:27:29.595531  287189 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/proxy-client.crt ...
	I1003 18:27:29.595564  287189 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/proxy-client.crt: {Name:mk3be2dd7ccf9597721db3ea56ebb44245648c26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:27:29.595750  287189 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/proxy-client.key ...
	I1003 18:27:29.595765  287189 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/proxy-client.key: {Name:mk5144ea8327b4cdfd47e82293649e6d7693a18c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:27:29.595951  287189 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 18:27:29.595991  287189 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem (1082 bytes)
	I1003 18:27:29.596021  287189 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem (1123 bytes)
	I1003 18:27:29.596048  287189 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/key.pem (1675 bytes)
	I1003 18:27:29.596599  287189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 18:27:29.615451  287189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1003 18:27:29.632668  287189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 18:27:29.650366  287189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1003 18:27:29.668879  287189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1003 18:27:29.685560  287189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1003 18:27:29.702921  287189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 18:27:29.720259  287189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1003 18:27:29.737586  287189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 18:27:29.754971  287189 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1003 18:27:29.767293  287189 ssh_runner.go:195] Run: openssl version
	I1003 18:27:29.773509  287189 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 18:27:29.781558  287189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:27:29.785074  287189 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  3 18:27 /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:27:29.785131  287189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:27:29.825944  287189 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1003 18:27:29.834046  287189 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 18:27:29.837521  287189 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1003 18:27:29.837623  287189 kubeadm.go:400] StartCluster: {Name:addons-952140 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-952140 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:27:29.837708  287189 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1003 18:27:29.837767  287189 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1003 18:27:29.864321  287189 cri.go:89] found id: ""
	I1003 18:27:29.864474  287189 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 18:27:29.872083  287189 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1003 18:27:29.879489  287189 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1003 18:27:29.879553  287189 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 18:27:29.886839  287189 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1003 18:27:29.886901  287189 kubeadm.go:157] found existing configuration files:
	
	I1003 18:27:29.886976  287189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1003 18:27:29.894441  287189 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1003 18:27:29.894505  287189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1003 18:27:29.901848  287189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1003 18:27:29.909210  287189 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1003 18:27:29.909288  287189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1003 18:27:29.916969  287189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1003 18:27:29.924904  287189 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1003 18:27:29.924990  287189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1003 18:27:29.932084  287189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1003 18:27:29.939726  287189 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1003 18:27:29.939850  287189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1003 18:27:29.947362  287189 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1003 18:27:29.997944  287189 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1003 18:27:29.998011  287189 kubeadm.go:318] [preflight] Running pre-flight checks
	I1003 18:27:30.074042  287189 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1003 18:27:30.074152  287189 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1003 18:27:30.074211  287189 kubeadm.go:318] OS: Linux
	I1003 18:27:30.074283  287189 kubeadm.go:318] CGROUPS_CPU: enabled
	I1003 18:27:30.074360  287189 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1003 18:27:30.074435  287189 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1003 18:27:30.074504  287189 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1003 18:27:30.074575  287189 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1003 18:27:30.074644  287189 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1003 18:27:30.074708  287189 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1003 18:27:30.074778  287189 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1003 18:27:30.074849  287189 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1003 18:27:30.158604  287189 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1003 18:27:30.158768  287189 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1003 18:27:30.158892  287189 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1003 18:27:30.169253  287189 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1003 18:27:30.173502  287189 out.go:252]   - Generating certificates and keys ...
	I1003 18:27:30.173611  287189 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1003 18:27:30.173709  287189 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1003 18:27:30.469446  287189 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1003 18:27:32.754604  287189 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1003 18:27:33.102636  287189 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1003 18:27:33.370237  287189 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1003 18:27:33.596294  287189 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1003 18:27:33.596672  287189 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-952140 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1003 18:27:33.980605  287189 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1003 18:27:33.981071  287189 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-952140 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1003 18:27:34.460225  287189 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1003 18:27:34.845280  287189 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1003 18:27:34.949650  287189 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1003 18:27:34.949975  287189 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1003 18:27:35.388926  287189 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1003 18:27:35.852554  287189 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1003 18:27:37.179116  287189 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1003 18:27:37.576875  287189 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1003 18:27:39.110661  287189 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1003 18:27:39.111595  287189 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1003 18:27:39.114447  287189 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1003 18:27:39.117721  287189 out.go:252]   - Booting up control plane ...
	I1003 18:27:39.117830  287189 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1003 18:27:39.125059  287189 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1003 18:27:39.126398  287189 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1003 18:27:39.148112  287189 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1003 18:27:39.148239  287189 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1003 18:27:39.155771  287189 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1003 18:27:39.156346  287189 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1003 18:27:39.156400  287189 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1003 18:27:39.285437  287189 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1003 18:27:39.285566  287189 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1003 18:27:39.797134  287189 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 513.549621ms
	I1003 18:27:39.797514  287189 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1003 18:27:39.797818  287189 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1003 18:27:39.798116  287189 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1003 18:27:39.798402  287189 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1003 18:27:41.975694  287189 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.176821071s
	I1003 18:27:44.060375  287189 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.261562115s
	I1003 18:27:45.800073  287189 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.00179073s
	I1003 18:27:45.831091  287189 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1003 18:27:45.849703  287189 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1003 18:27:45.865907  287189 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1003 18:27:45.866361  287189 kubeadm.go:318] [mark-control-plane] Marking the node addons-952140 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1003 18:27:45.879548  287189 kubeadm.go:318] [bootstrap-token] Using token: fbxqq7.5pacsqus63pybu4q
	I1003 18:27:45.882701  287189 out.go:252]   - Configuring RBAC rules ...
	I1003 18:27:45.882823  287189 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1003 18:27:45.900037  287189 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1003 18:27:45.910033  287189 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1003 18:27:45.919386  287189 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1003 18:27:45.932675  287189 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1003 18:27:45.957218  287189 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1003 18:27:46.208200  287189 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1003 18:27:46.652025  287189 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1003 18:27:47.209933  287189 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1003 18:27:47.211586  287189 kubeadm.go:318] 
	I1003 18:27:47.211662  287189 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1003 18:27:47.211670  287189 kubeadm.go:318] 
	I1003 18:27:47.211750  287189 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1003 18:27:47.211755  287189 kubeadm.go:318] 
	I1003 18:27:47.211781  287189 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1003 18:27:47.211843  287189 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1003 18:27:47.211902  287189 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1003 18:27:47.211915  287189 kubeadm.go:318] 
	I1003 18:27:47.211972  287189 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1003 18:27:47.211977  287189 kubeadm.go:318] 
	I1003 18:27:47.212027  287189 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1003 18:27:47.212031  287189 kubeadm.go:318] 
	I1003 18:27:47.212086  287189 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1003 18:27:47.212164  287189 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1003 18:27:47.212236  287189 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1003 18:27:47.212240  287189 kubeadm.go:318] 
	I1003 18:27:47.212328  287189 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1003 18:27:47.212409  287189 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1003 18:27:47.212414  287189 kubeadm.go:318] 
	I1003 18:27:47.212501  287189 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token fbxqq7.5pacsqus63pybu4q \
	I1003 18:27:47.212608  287189 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:f66ff31263aa4cda6b17caa2076838d6a1918275f1c2773b90b119c0d4a4d71a \
	I1003 18:27:47.212630  287189 kubeadm.go:318] 	--control-plane 
	I1003 18:27:47.212634  287189 kubeadm.go:318] 
	I1003 18:27:47.212735  287189 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1003 18:27:47.212741  287189 kubeadm.go:318] 
	I1003 18:27:47.212826  287189 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token fbxqq7.5pacsqus63pybu4q \
	I1003 18:27:47.212938  287189 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:f66ff31263aa4cda6b17caa2076838d6a1918275f1c2773b90b119c0d4a4d71a 
	I1003 18:27:47.216798  287189 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1003 18:27:47.217138  287189 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1003 18:27:47.217272  287189 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1003 18:27:47.217293  287189 cni.go:84] Creating CNI manager for ""
	I1003 18:27:47.217302  287189 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 18:27:47.220479  287189 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1003 18:27:47.223491  287189 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1003 18:27:47.227899  287189 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1003 18:27:47.227923  287189 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1003 18:27:47.242859  287189 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1003 18:27:47.530445  287189 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1003 18:27:47.530545  287189 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 18:27:47.530592  287189 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-952140 minikube.k8s.io/updated_at=2025_10_03T18_27_47_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=a43873c79fc22f8b1ccd29d3dfa635d392b09335 minikube.k8s.io/name=addons-952140 minikube.k8s.io/primary=true
	I1003 18:27:47.742326  287189 ops.go:34] apiserver oom_adj: -16
	I1003 18:27:47.742444  287189 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 18:27:48.242619  287189 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 18:27:48.742594  287189 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 18:27:49.242563  287189 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 18:27:49.743505  287189 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 18:27:50.242824  287189 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 18:27:50.743079  287189 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 18:27:51.242841  287189 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 18:27:51.742776  287189 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 18:27:51.884205  287189 kubeadm.go:1113] duration metric: took 4.353727174s to wait for elevateKubeSystemPrivileges
	I1003 18:27:51.884232  287189 kubeadm.go:402] duration metric: took 22.046612743s to StartCluster
	I1003 18:27:51.884248  287189 settings.go:142] acquiring lock: {Name:mkc95577dbc448e3409dfa2b5e53a3a1327cb451 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:27:51.884358  287189 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21625-284583/kubeconfig
	I1003 18:27:51.884806  287189 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/kubeconfig: {Name:mkc1323fd87f4a78231a26d2dab0dff7feecf1e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:27:51.885658  287189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1003 18:27:51.885955  287189 config.go:182] Loaded profile config "addons-952140": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:27:51.885770  287189 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1003 18:27:51.886049  287189 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1003 18:27:51.886131  287189 addons.go:69] Setting yakd=true in profile "addons-952140"
	I1003 18:27:51.886148  287189 addons.go:238] Setting addon yakd=true in "addons-952140"
	I1003 18:27:51.886169  287189 host.go:66] Checking if "addons-952140" exists ...
	I1003 18:27:51.886625  287189 cli_runner.go:164] Run: docker container inspect addons-952140 --format={{.State.Status}}
	I1003 18:27:51.887143  287189 addons.go:69] Setting metrics-server=true in profile "addons-952140"
	I1003 18:27:51.887178  287189 addons.go:238] Setting addon metrics-server=true in "addons-952140"
	I1003 18:27:51.887210  287189 host.go:66] Checking if "addons-952140" exists ...
	I1003 18:27:51.887613  287189 cli_runner.go:164] Run: docker container inspect addons-952140 --format={{.State.Status}}
	I1003 18:27:51.888255  287189 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-952140"
	I1003 18:27:51.891424  287189 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-952140"
	I1003 18:27:51.891508  287189 host.go:66] Checking if "addons-952140" exists ...
	I1003 18:27:51.891993  287189 cli_runner.go:164] Run: docker container inspect addons-952140 --format={{.State.Status}}
	I1003 18:27:51.894863  287189 out.go:179] * Verifying Kubernetes components...
	I1003 18:27:51.890113  287189 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-952140"
	I1003 18:27:51.898013  287189 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-952140"
	I1003 18:27:51.902820  287189 host.go:66] Checking if "addons-952140" exists ...
	I1003 18:27:51.903311  287189 cli_runner.go:164] Run: docker container inspect addons-952140 --format={{.State.Status}}
	I1003 18:27:51.890122  287189 addons.go:69] Setting registry=true in profile "addons-952140"
	I1003 18:27:51.903723  287189 addons.go:238] Setting addon registry=true in "addons-952140"
	I1003 18:27:51.903751  287189 host.go:66] Checking if "addons-952140" exists ...
	I1003 18:27:51.904153  287189 cli_runner.go:164] Run: docker container inspect addons-952140 --format={{.State.Status}}
	I1003 18:27:51.908395  287189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:27:51.890136  287189 addons.go:69] Setting registry-creds=true in profile "addons-952140"
	I1003 18:27:51.908549  287189 addons.go:238] Setting addon registry-creds=true in "addons-952140"
	I1003 18:27:51.908592  287189 host.go:66] Checking if "addons-952140" exists ...
	I1003 18:27:51.909212  287189 cli_runner.go:164] Run: docker container inspect addons-952140 --format={{.State.Status}}
	I1003 18:27:51.890143  287189 addons.go:69] Setting storage-provisioner=true in profile "addons-952140"
	I1003 18:27:51.927188  287189 addons.go:238] Setting addon storage-provisioner=true in "addons-952140"
	I1003 18:27:51.927240  287189 host.go:66] Checking if "addons-952140" exists ...
	I1003 18:27:51.927699  287189 cli_runner.go:164] Run: docker container inspect addons-952140 --format={{.State.Status}}
	I1003 18:27:51.890149  287189 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-952140"
	I1003 18:27:51.935395  287189 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-952140"
	I1003 18:27:51.935763  287189 cli_runner.go:164] Run: docker container inspect addons-952140 --format={{.State.Status}}
	I1003 18:27:51.890155  287189 addons.go:69] Setting volcano=true in profile "addons-952140"
	I1003 18:27:51.969696  287189 addons.go:238] Setting addon volcano=true in "addons-952140"
	I1003 18:27:51.969938  287189 host.go:66] Checking if "addons-952140" exists ...
	I1003 18:27:51.972839  287189 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1003 18:27:51.972995  287189 cli_runner.go:164] Run: docker container inspect addons-952140 --format={{.State.Status}}
	I1003 18:27:51.890259  287189 addons.go:69] Setting volumesnapshots=true in profile "addons-952140"
	I1003 18:27:51.890308  287189 addons.go:69] Setting ingress=true in profile "addons-952140"
	I1003 18:27:51.890312  287189 addons.go:69] Setting cloud-spanner=true in profile "addons-952140"
	I1003 18:27:51.890316  287189 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-952140"
	I1003 18:27:51.890418  287189 addons.go:69] Setting default-storageclass=true in profile "addons-952140"
	I1003 18:27:51.890426  287189 addons.go:69] Setting gcp-auth=true in profile "addons-952140"
	I1003 18:27:51.890433  287189 addons.go:69] Setting inspektor-gadget=true in profile "addons-952140"
	I1003 18:27:51.890439  287189 addons.go:69] Setting ingress-dns=true in profile "addons-952140"
	I1003 18:27:51.976824  287189 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1003 18:27:51.987378  287189 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1003 18:27:51.994747  287189 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1003 18:27:51.994856  287189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952140
	I1003 18:27:51.996592  287189 addons.go:238] Setting addon volumesnapshots=true in "addons-952140"
	I1003 18:27:51.996700  287189 host.go:66] Checking if "addons-952140" exists ...
	I1003 18:27:52.003447  287189 cli_runner.go:164] Run: docker container inspect addons-952140 --format={{.State.Status}}
	I1003 18:27:52.018177  287189 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1003 18:27:52.018262  287189 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1003 18:27:52.018343  287189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952140
	I1003 18:27:52.032572  287189 addons.go:238] Setting addon ingress=true in "addons-952140"
	I1003 18:27:52.032678  287189 host.go:66] Checking if "addons-952140" exists ...
	I1003 18:27:52.033217  287189 cli_runner.go:164] Run: docker container inspect addons-952140 --format={{.State.Status}}
	I1003 18:27:52.052853  287189 addons.go:238] Setting addon cloud-spanner=true in "addons-952140"
	I1003 18:27:52.052963  287189 host.go:66] Checking if "addons-952140" exists ...
	I1003 18:27:52.053519  287189 cli_runner.go:164] Run: docker container inspect addons-952140 --format={{.State.Status}}
	I1003 18:27:52.067832  287189 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-952140"
	I1003 18:27:52.067884  287189 host.go:66] Checking if "addons-952140" exists ...
	I1003 18:27:52.068341  287189 cli_runner.go:164] Run: docker container inspect addons-952140 --format={{.State.Status}}
	I1003 18:27:52.080977  287189 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-952140"
	I1003 18:27:52.081337  287189 cli_runner.go:164] Run: docker container inspect addons-952140 --format={{.State.Status}}
	I1003 18:27:52.100077  287189 mustload.go:65] Loading cluster: addons-952140
	I1003 18:27:52.100303  287189 config.go:182] Loaded profile config "addons-952140": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:27:52.100560  287189 cli_runner.go:164] Run: docker container inspect addons-952140 --format={{.State.Status}}
	I1003 18:27:52.103824  287189 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1003 18:27:52.114286  287189 addons.go:238] Setting addon inspektor-gadget=true in "addons-952140"
	I1003 18:27:52.114342  287189 host.go:66] Checking if "addons-952140" exists ...
	I1003 18:27:52.114830  287189 cli_runner.go:164] Run: docker container inspect addons-952140 --format={{.State.Status}}
	I1003 18:27:52.127892  287189 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1003 18:27:52.132090  287189 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1003 18:27:52.132129  287189 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1003 18:27:52.132192  287189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952140
	I1003 18:27:52.132895  287189 addons.go:238] Setting addon ingress-dns=true in "addons-952140"
	I1003 18:27:52.132954  287189 host.go:66] Checking if "addons-952140" exists ...
	I1003 18:27:52.133404  287189 cli_runner.go:164] Run: docker container inspect addons-952140 --format={{.State.Status}}
	I1003 18:27:52.154631  287189 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 18:27:52.155777  287189 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1003 18:27:52.163499  287189 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1003 18:27:52.163524  287189 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1003 18:27:52.163594  287189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952140
	I1003 18:27:52.164296  287189 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1003 18:27:52.164346  287189 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1003 18:27:52.164442  287189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952140
	I1003 18:27:52.207505  287189 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:27:52.207525  287189 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1003 18:27:52.207601  287189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952140
	I1003 18:27:52.242450  287189 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-952140"
	I1003 18:27:52.242489  287189 host.go:66] Checking if "addons-952140" exists ...
	I1003 18:27:52.242928  287189 cli_runner.go:164] Run: docker container inspect addons-952140 --format={{.State.Status}}
	I1003 18:27:52.292370  287189 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1003 18:27:52.306928  287189 out.go:179]   - Using image docker.io/registry:3.0.0
	I1003 18:27:52.312840  287189 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1003 18:27:52.312866  287189 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1003 18:27:52.312939  287189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952140
	W1003 18:27:52.323928  287189 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1003 18:27:52.324206  287189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/addons-952140/id_rsa Username:docker}
	I1003 18:27:52.325486  287189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/addons-952140/id_rsa Username:docker}
	I1003 18:27:52.348965  287189 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1003 18:27:52.350861  287189 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1003 18:27:52.350873  287189 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I1003 18:27:52.351954  287189 host.go:66] Checking if "addons-952140" exists ...
	I1003 18:27:52.380952  287189 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1003 18:27:52.380979  287189 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1003 18:27:52.381048  287189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952140
	I1003 18:27:52.381234  287189 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1003 18:27:52.381243  287189 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1003 18:27:52.381278  287189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952140
	I1003 18:27:52.396557  287189 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.1
	I1003 18:27:52.402774  287189 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1003 18:27:52.402799  287189 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1003 18:27:52.402875  287189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952140
	I1003 18:27:52.415824  287189 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1003 18:27:52.420777  287189 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1003 18:27:52.422999  287189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/addons-952140/id_rsa Username:docker}
	I1003 18:27:52.424533  287189 addons.go:238] Setting addon default-storageclass=true in "addons-952140"
	I1003 18:27:52.424573  287189 host.go:66] Checking if "addons-952140" exists ...
	I1003 18:27:52.425055  287189 cli_runner.go:164] Run: docker container inspect addons-952140 --format={{.State.Status}}
	I1003 18:27:52.433153  287189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/addons-952140/id_rsa Username:docker}
	I1003 18:27:52.433879  287189 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1003 18:27:52.434253  287189 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1003 18:27:52.434268  287189 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1003 18:27:52.434322  287189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952140
	I1003 18:27:52.442013  287189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/addons-952140/id_rsa Username:docker}
	I1003 18:27:52.443911  287189 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1003 18:27:52.444086  287189 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I1003 18:27:52.447971  287189 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1003 18:27:52.451623  287189 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1003 18:27:52.454684  287189 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1003 18:27:52.456916  287189 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1003 18:27:52.460168  287189 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1003 18:27:52.460174  287189 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1003 18:27:52.465012  287189 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1003 18:27:52.465038  287189 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1003 18:27:52.465112  287189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952140
	I1003 18:27:52.465434  287189 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1003 18:27:52.465471  287189 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1003 18:27:52.465545  287189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952140
	I1003 18:27:52.517675  287189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/addons-952140/id_rsa Username:docker}
	I1003 18:27:52.519654  287189 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1003 18:27:52.523358  287189 out.go:179]   - Using image docker.io/busybox:stable
	I1003 18:27:52.526904  287189 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1003 18:27:52.526927  287189 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1003 18:27:52.526999  287189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952140
	I1003 18:27:52.572059  287189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/addons-952140/id_rsa Username:docker}
	I1003 18:27:52.643523  287189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/addons-952140/id_rsa Username:docker}
	I1003 18:27:52.643955  287189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/addons-952140/id_rsa Username:docker}
	I1003 18:27:52.652045  287189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/addons-952140/id_rsa Username:docker}
	I1003 18:27:52.658421  287189 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1003 18:27:52.658441  287189 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1003 18:27:52.658502  287189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952140
	I1003 18:27:52.666270  287189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/addons-952140/id_rsa Username:docker}
	I1003 18:27:52.668267  287189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/addons-952140/id_rsa Username:docker}
	I1003 18:27:52.670682  287189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/addons-952140/id_rsa Username:docker}
	I1003 18:27:52.686351  287189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/addons-952140/id_rsa Username:docker}
	I1003 18:27:52.697005  287189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/addons-952140/id_rsa Username:docker}
	W1003 18:27:52.698237  287189 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1003 18:27:52.698274  287189 retry.go:31] will retry after 270.649643ms: ssh: handshake failed: EOF
	I1003 18:27:52.810020  287189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1003 18:27:52.810207  287189 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 18:27:53.027037  287189 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1003 18:27:53.027113  287189 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1003 18:27:53.075653  287189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1003 18:27:53.097995  287189 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1003 18:27:53.098068  287189 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1003 18:27:53.116878  287189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1003 18:27:53.144066  287189 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1003 18:27:53.144092  287189 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1003 18:27:53.153703  287189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:27:53.185518  287189 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1003 18:27:53.185591  287189 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1003 18:27:53.246312  287189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1003 18:27:53.253600  287189 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1003 18:27:53.253672  287189 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1003 18:27:53.302188  287189 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1003 18:27:53.302267  287189 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1003 18:27:53.313101  287189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1003 18:27:53.316534  287189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1003 18:27:53.330656  287189 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1003 18:27:53.330736  287189 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1003 18:27:53.342190  287189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1003 18:27:53.354262  287189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1003 18:27:53.360647  287189 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1003 18:27:53.360719  287189 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1003 18:27:53.400438  287189 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1003 18:27:53.400524  287189 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1003 18:27:53.445494  287189 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1003 18:27:53.445565  287189 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1003 18:27:53.477988  287189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1003 18:27:53.486360  287189 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1003 18:27:53.486437  287189 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1003 18:27:53.505686  287189 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1003 18:27:53.505763  287189 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1003 18:27:53.542487  287189 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1003 18:27:53.542563  287189 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1003 18:27:53.581967  287189 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1003 18:27:53.582048  287189 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1003 18:27:53.596563  287189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1003 18:27:53.676429  287189 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1003 18:27:53.676519  287189 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1003 18:27:53.681971  287189 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1003 18:27:53.682050  287189 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1003 18:27:53.716867  287189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:27:53.720430  287189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1003 18:27:53.782430  287189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1003 18:27:53.829023  287189 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1003 18:27:53.829048  287189 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1003 18:27:53.848637  287189 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1003 18:27:53.848659  287189 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1003 18:27:54.073054  287189 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1003 18:27:54.073124  287189 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1003 18:27:54.079394  287189 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1003 18:27:54.079474  287189 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1003 18:27:54.241287  287189 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1003 18:27:54.241358  287189 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1003 18:27:54.389742  287189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1003 18:27:54.439690  287189 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1003 18:27:54.439770  287189 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1003 18:27:54.507319  287189 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.69706511s)
	I1003 18:27:54.507569  287189 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.697471311s)
	I1003 18:27:54.507607  287189 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
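
For reference, the sed pipeline that just completed rewrites the coredns ConfigMap in place. Reconstructed from the sed expression in the command, the stanza it inserts ahead of the "forward . /etc/resolv.conf" line looks like this (it also adds "log" after "errors"); this is what makes host.minikube.internal resolve to the host gateway 192.168.49.1 from inside the cluster:

    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }
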
	I1003 18:27:54.508964  287189 node_ready.go:35] waiting up to 6m0s for node "addons-952140" to be "Ready" ...
	I1003 18:27:54.544022  287189 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1003 18:27:54.544098  287189 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1003 18:27:54.690492  287189 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.573578587s)
	I1003 18:27:54.690651  287189 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.614899295s)
	I1003 18:27:54.782010  287189 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1003 18:27:54.782082  287189 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1003 18:27:54.919707  287189 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1003 18:27:54.919793  287189 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1003 18:27:55.014540  287189 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-952140" context rescaled to 1 replicas
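
The rescale logged by kapi.go:214 is performed through client-go against the coredns Deployment; a rough CLI equivalent, shown only for illustration (this is not the call the harness makes), would be:

    kubectl -n kube-system scale deployment coredns --replicas=1
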
	I1003 18:27:55.165958  287189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1003 18:27:55.173633  287189 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.019846518s)
	I1003 18:27:56.258004  287189 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.944798296s)
	I1003 18:27:56.258108  287189 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.011726106s)
	W1003 18:27:56.540628  287189 node_ready.go:57] node "addons-952140" has "Ready":"False" status (will retry)
	I1003 18:27:57.014215  287189 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.697599318s)
	I1003 18:27:57.014553  287189 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.672255393s)
	I1003 18:27:58.008743  287189 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.65437879s)
	I1003 18:27:58.008774  287189 addons.go:479] Verifying addon ingress=true in "addons-952140"
	I1003 18:27:58.008992  287189 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.530928226s)
	I1003 18:27:58.009016  287189 addons.go:479] Verifying addon registry=true in "addons-952140"
	I1003 18:27:58.009463  287189 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.412822251s)
	I1003 18:27:58.009520  287189 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.289012177s)
	W1003 18:27:58.009536  287189 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:27:58.009606  287189 retry.go:31] will retry after 300.387324ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
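
The inspektor-gadget failure above is a client-side validation error: kubectl requires every document in an applied manifest to declare both apiVersion and kind, and ig-crd.yaml as shipped to the node evidently contains a document without them. Because validation happens before anything reaches the API server, the later retries with apply --force hit exactly the same error; --force affects how conflicting objects are replaced, not client-side validation. A quick check on the node would be something like the following (the expected header in the comment is an assumption about what the CRD file should contain, not what this run found):

    # inspect the start of the manifest that fails validation
    head -n 5 /etc/kubernetes/addons/ig-crd.yaml
    # a valid CRD document would begin with, e.g.:
    #   apiVersion: apiextensions.k8s.io/v1
    #   kind: CustomResourceDefinition
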
	I1003 18:27:58.009627  287189 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.227173637s)
	I1003 18:27:58.011292  287189 addons.go:479] Verifying addon metrics-server=true in "addons-952140"
	I1003 18:27:58.009700  287189 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.619882084s)
	W1003 18:27:58.011342  287189 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1003 18:27:58.011358  287189 retry.go:31] will retry after 355.343341ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
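
This one is an ordering problem rather than a bad manifest: the snapshot CRDs are created in the same apply, but the VolumeSnapshotClass object is rejected because the new API group is not yet discoverable when kubectl maps the resource. The forced re-apply a few seconds later (18:27:58 to 18:28:01) succeeds once the CRDs are established. A manual equivalent that splits the apply so the CRD is registered first (the timeout value is illustrative):

    kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl wait --for=condition=established --timeout=60s \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
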
	I1003 18:27:58.009483  287189 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.292531229s)
	I1003 18:27:58.012129  287189 out.go:179] * Verifying ingress addon...
	I1003 18:27:58.012267  287189 out.go:179] * Verifying registry addon...
	I1003 18:27:58.014455  287189 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-952140 service yakd-dashboard -n yakd-dashboard
	
	I1003 18:27:58.017905  287189 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1003 18:27:58.018864  287189 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1003 18:27:58.027930  287189 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1003 18:27:58.027952  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:27:58.028082  287189 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1003 18:27:58.028088  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:27:58.275135  287189 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.109081492s)
	I1003 18:27:58.275177  287189 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-952140"
	I1003 18:27:58.280306  287189 out.go:179] * Verifying csi-hostpath-driver addon...
	I1003 18:27:58.283872  287189 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1003 18:27:58.288693  287189 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1003 18:27:58.288717  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
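
The kapi.go:96 lines that follow are a poll loop: minikube re-lists pods matching the label selector until they report Running and Ready, or the per-addon timeout expires. A rough CLI equivalent for the csi-hostpath-driver wait, purely for illustration (the 6m timeout is an assumption, not the harness value):

    kubectl -n kube-system wait --for=condition=Ready pod \
      -l kubernetes.io/minikube-addons=csi-hostpath-driver --timeout=6m
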
	I1003 18:27:58.311051  287189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1003 18:27:58.367593  287189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1003 18:27:58.522388  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:27:58.522818  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:27:58.794795  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1003 18:27:59.012799  287189 node_ready.go:57] node "addons-952140" has "Ready":"False" status (will retry)
	I1003 18:27:59.022519  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:27:59.022728  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1003 18:27:59.261341  287189 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:27:59.261370  287189 retry.go:31] will retry after 265.416503ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:27:59.288769  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:27:59.521994  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:27:59.522302  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:27:59.527367  287189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1003 18:27:59.787667  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:27:59.991207  287189 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1003 18:27:59.991329  287189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952140
	I1003 18:28:00.081208  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:00.081618  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:00.082836  287189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/addons-952140/id_rsa Username:docker}
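
The docker container inspect template above extracts the host port that Docker mapped to the container's 22/tcp; minikube then opens an SSH session to 127.0.0.1 on that port using the machine's id_rsa key. Evaluated by hand on this host it would look like:

    docker container inspect \
      -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-952140
    # -> 33138 in this run, hence the ssh client to 127.0.0.1:33138 above
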
	I1003 18:28:00.307615  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:00.362565  287189 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1003 18:28:00.412284  287189 addons.go:238] Setting addon gcp-auth=true in "addons-952140"
	I1003 18:28:00.412425  287189 host.go:66] Checking if "addons-952140" exists ...
	I1003 18:28:00.413052  287189 cli_runner.go:164] Run: docker container inspect addons-952140 --format={{.State.Status}}
	I1003 18:28:00.479093  287189 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1003 18:28:00.479172  287189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952140
	I1003 18:28:00.517149  287189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/addons-952140/id_rsa Username:docker}
	I1003 18:28:00.524391  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:00.525996  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:00.787202  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:01.022920  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:01.023632  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:01.143602  287189 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.775960858s)
	I1003 18:28:01.143693  287189 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.616295151s)
	W1003 18:28:01.143722  287189 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:28:01.143740  287189 retry.go:31] will retry after 481.74906ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
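
retry.go:31 keeps re-running the failed apply after roughly increasing, jittered delays (about 300ms up to roughly 3.3s across this run). Since the underlying manifest never changes, every attempt fails the same way; the retry loop only helps for transient errors such as the CRD-ordering case above. The shape of the loop, sketched in shell with made-up delays (minikube computes the backoff internally in Go, and the real command also sets sudo and KUBECONFIG as shown in the log):

    for delay in 0.3 0.5 1 2 4; do
      kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml \
                            -f /etc/kubernetes/addons/ig-deployment.yaml && break
      sleep "$delay"
    done
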
	I1003 18:28:01.147010  287189 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1003 18:28:01.149975  287189 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1003 18:28:01.152817  287189 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1003 18:28:01.152846  287189 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1003 18:28:01.167943  287189 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1003 18:28:01.168011  287189 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1003 18:28:01.182137  287189 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1003 18:28:01.182163  287189 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1003 18:28:01.205064  287189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1003 18:28:01.288069  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1003 18:28:01.512338  287189 node_ready.go:57] node "addons-952140" has "Ready":"False" status (will retry)
	I1003 18:28:01.522804  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:01.524238  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:01.626453  287189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1003 18:28:01.736565  287189 addons.go:479] Verifying addon gcp-auth=true in "addons-952140"
	I1003 18:28:01.739884  287189 out.go:179] * Verifying gcp-auth addon...
	I1003 18:28:01.743860  287189 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1003 18:28:01.783528  287189 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1003 18:28:01.783554  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:01.792100  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:02.023535  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:02.023928  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:02.247467  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:02.287564  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:02.522950  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:02.523726  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1003 18:28:02.563938  287189 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:28:02.563986  287189 retry.go:31] will retry after 1.197531103s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:28:02.746958  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:02.786954  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:03.022082  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:03.022166  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:03.247305  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:03.287190  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:03.521321  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:03.521659  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:03.747037  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:03.762140  287189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1003 18:28:03.787851  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1003 18:28:04.013196  287189 node_ready.go:57] node "addons-952140" has "Ready":"False" status (will retry)
	I1003 18:28:04.022596  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:04.023648  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:04.249580  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:04.287741  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:04.522598  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:04.523334  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1003 18:28:04.572879  287189 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:28:04.572961  287189 retry.go:31] will retry after 1.579380909s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:28:04.746919  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:04.786871  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:05.021912  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:05.023447  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:05.247611  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:05.287483  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:05.521370  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:05.522304  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:05.747734  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:05.787451  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:06.022193  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:06.023199  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:06.152500  287189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1003 18:28:06.247685  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:06.287848  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1003 18:28:06.511877  287189 node_ready.go:57] node "addons-952140" has "Ready":"False" status (will retry)
	I1003 18:28:06.523485  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:06.524290  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:06.747336  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:06.787889  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1003 18:28:06.982775  287189 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:28:06.982870  287189 retry.go:31] will retry after 1.448783473s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:28:07.021477  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:07.021756  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:07.248226  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:07.287040  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:07.521169  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:07.522379  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:07.747842  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:07.787511  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:08.022213  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:08.022503  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:08.247598  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:08.286629  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:08.432873  287189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1003 18:28:08.512202  287189 node_ready.go:57] node "addons-952140" has "Ready":"False" status (will retry)
	I1003 18:28:08.521945  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:08.522780  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:08.746769  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:08.788572  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:09.023935  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:09.024029  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:09.247644  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1003 18:28:09.256162  287189 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:28:09.256196  287189 retry.go:31] will retry after 1.878991162s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:28:09.287260  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:09.521095  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:09.522355  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:09.748006  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:09.786558  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:10.022182  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:10.022409  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:10.247601  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:10.287529  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1003 18:28:10.512560  287189 node_ready.go:57] node "addons-952140" has "Ready":"False" status (will retry)
	I1003 18:28:10.521839  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:10.522330  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:10.747093  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:10.786717  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:11.022113  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:11.022177  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:11.135449  287189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1003 18:28:11.246908  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:11.287886  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:11.522493  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:11.523031  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:11.747872  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:11.787524  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1003 18:28:11.961235  287189 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:28:11.961269  287189 retry.go:31] will retry after 2.162467062s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:28:12.021606  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:12.021746  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:12.248279  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:12.286956  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:12.522126  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:12.521778  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:12.747785  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:12.787401  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1003 18:28:13.012118  287189 node_ready.go:57] node "addons-952140" has "Ready":"False" status (will retry)
	I1003 18:28:13.022176  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:13.022494  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:13.247698  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:13.286625  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:13.522247  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:13.522393  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:13.747718  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:13.787664  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:14.022341  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:14.022427  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:14.124780  287189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1003 18:28:14.247152  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:14.287673  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:14.522946  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:14.523503  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:14.747251  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:14.787613  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1003 18:28:14.947360  287189 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:28:14.947434  287189 retry.go:31] will retry after 3.350178966s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:28:15.013492  287189 node_ready.go:57] node "addons-952140" has "Ready":"False" status (will retry)
	I1003 18:28:15.022130  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:15.023627  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:15.246449  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:15.287396  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:15.521130  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:15.522855  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:15.747470  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:15.787235  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:16.022338  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:16.022722  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:16.247702  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:16.288097  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:16.521936  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:16.522000  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:16.746878  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:16.786693  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:17.022216  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:17.022370  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:17.247282  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:17.287172  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1003 18:28:17.512420  287189 node_ready.go:57] node "addons-952140" has "Ready":"False" status (will retry)
	I1003 18:28:17.522147  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:17.523254  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:17.748201  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:17.786715  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:18.022026  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:18.022243  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:18.247170  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:18.287108  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:18.298223  287189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1003 18:28:18.522459  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:18.523036  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:18.747133  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:18.787554  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:19.021580  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:19.024153  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1003 18:28:19.082657  287189 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:28:19.082690  287189 retry.go:31] will retry after 8.00452608s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:28:19.247467  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:19.287265  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:19.521832  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:19.521994  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:19.747298  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:19.787000  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1003 18:28:20.012225  287189 node_ready.go:57] node "addons-952140" has "Ready":"False" status (will retry)
	I1003 18:28:20.021587  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:20.023010  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:20.247027  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:20.286731  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:20.522010  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:20.522059  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:20.746778  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:20.787870  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:21.021560  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:21.022468  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:21.247878  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:21.287797  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:21.521383  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:21.522209  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:21.748428  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:21.787587  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1003 18:28:22.012791  287189 node_ready.go:57] node "addons-952140" has "Ready":"False" status (will retry)
	I1003 18:28:22.021871  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:22.022012  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:22.246970  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:22.287845  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:22.521771  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:22.522135  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:22.746808  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:22.786793  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:23.021210  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:23.022927  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:23.246636  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:23.287512  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:23.521736  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:23.521933  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:23.747604  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:23.787498  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:24.021216  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:24.022623  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:24.246624  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:24.287639  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1003 18:28:24.512705  287189 node_ready.go:57] node "addons-952140" has "Ready":"False" status (will retry)
	I1003 18:28:24.521976  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:24.522132  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:24.747212  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:24.787009  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:25.021397  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:25.021642  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:25.246783  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:25.287413  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:25.521179  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:25.522394  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:25.747653  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:25.787543  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:26.022154  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:26.022221  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:26.247260  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:26.286891  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1003 18:28:26.512774  287189 node_ready.go:57] node "addons-952140" has "Ready":"False" status (will retry)
	I1003 18:28:26.521875  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:26.521993  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:26.747055  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:26.786713  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:27.021725  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:27.021787  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:27.087733  287189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1003 18:28:27.246974  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:27.287233  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:27.525261  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:27.525460  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:27.747525  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:27.787693  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1003 18:28:27.906544  287189 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:28:27.906600  287189 retry.go:31] will retry after 20.407055858s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:28:28.022271  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:28.022356  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:28.247199  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:28.287552  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:28.522050  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:28.522140  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:28.746832  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:28.787925  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1003 18:28:29.012969  287189 node_ready.go:57] node "addons-952140" has "Ready":"False" status (will retry)
	I1003 18:28:29.020558  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:29.021863  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:29.246833  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:29.287871  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:29.521888  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:29.522204  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:29.746900  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:29.787618  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:30.031510  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:30.031303  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:30.247699  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:30.287420  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:30.521988  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:30.522000  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:30.747051  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:30.787512  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:31.021884  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:31.022486  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:31.247471  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:31.287075  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1003 18:28:31.511903  287189 node_ready.go:57] node "addons-952140" has "Ready":"False" status (will retry)
	I1003 18:28:31.522171  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:31.522282  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:31.747308  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:31.787167  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:32.021610  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:32.022085  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:32.247016  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:32.286673  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:32.521875  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:32.522025  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:32.747191  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:32.787988  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:33.028936  287189 node_ready.go:49] node "addons-952140" is "Ready"
	I1003 18:28:33.028976  287189 node_ready.go:38] duration metric: took 38.519936489s for node "addons-952140" to be "Ready" ...
	I1003 18:28:33.028991  287189 api_server.go:52] waiting for apiserver process to appear ...
	I1003 18:28:33.029089  287189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:28:33.031392  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:33.031835  287189 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1003 18:28:33.031854  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:33.044897  287189 api_server.go:72] duration metric: took 41.158104919s to wait for apiserver process to appear ...
	I1003 18:28:33.044941  287189 api_server.go:88] waiting for apiserver healthz status ...
	I1003 18:28:33.044979  287189 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1003 18:28:33.059321  287189 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1003 18:28:33.067290  287189 api_server.go:141] control plane version: v1.34.1
	I1003 18:28:33.067327  287189 api_server.go:131] duration metric: took 22.377923ms to wait for apiserver health ...
	I1003 18:28:33.067336  287189 system_pods.go:43] waiting for kube-system pods to appear ...
	I1003 18:28:33.147794  287189 system_pods.go:59] 19 kube-system pods found
	I1003 18:28:33.147843  287189 system_pods.go:61] "coredns-66bc5c9577-2hhqm" [daea3b45-b31f-453a-80f5-c30f7fce4122] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1003 18:28:33.147850  287189 system_pods.go:61] "csi-hostpath-attacher-0" [376ecb21-1ca4-4f77-bac5-a4b5af7ccfdd] Pending
	I1003 18:28:33.147879  287189 system_pods.go:61] "csi-hostpath-resizer-0" [8450e23b-d7f0-4b50-a20c-a7fc38411191] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1003 18:28:33.147905  287189 system_pods.go:61] "csi-hostpathplugin-vsbgb" [e6597406-4522-46da-ad41-da01126918f9] Pending
	I1003 18:28:33.147918  287189 system_pods.go:61] "etcd-addons-952140" [6c2991f4-ee56-4fbb-8f55-bf86ce3c8bc3] Running
	I1003 18:28:33.147924  287189 system_pods.go:61] "kindnet-vx5lb" [39f3102d-aa3a-4a72-b884-4fcf57878faf] Running
	I1003 18:28:33.147937  287189 system_pods.go:61] "kube-apiserver-addons-952140" [70a4748f-eedd-41aa-8ade-b8d13f6c85fe] Running
	I1003 18:28:33.147943  287189 system_pods.go:61] "kube-controller-manager-addons-952140" [7710c628-50b2-44d1-9faa-7ba463e404c9] Running
	I1003 18:28:33.147948  287189 system_pods.go:61] "kube-ingress-dns-minikube" [fbc268d3-be63-48bd-a93c-f3466f7458ed] Pending
	I1003 18:28:33.147952  287189 system_pods.go:61] "kube-proxy-5hd7r" [674b4e86-cafa-4e3f-8b57-719de4a646f5] Running
	I1003 18:28:33.147962  287189 system_pods.go:61] "kube-scheduler-addons-952140" [ef6d468e-24f4-474f-adeb-1d9e9cf74c87] Running
	I1003 18:28:33.147988  287189 system_pods.go:61] "metrics-server-85b7d694d7-tscmk" [51883ecf-f53c-4001-af25-5785ed3fa7db] Pending
	I1003 18:28:33.147994  287189 system_pods.go:61] "nvidia-device-plugin-daemonset-84v2d" [c0869084-f969-40cf-8475-57eedeb02a93] Pending
	I1003 18:28:33.148009  287189 system_pods.go:61] "registry-66898fdd98-88sgc" [749ffc38-9d67-4777-b96d-422ce39f2b46] Pending
	I1003 18:28:33.148021  287189 system_pods.go:61] "registry-creds-764b6fb674-dqntl" [57dce88b-cd6c-4f39-babf-2079e2174e05] Pending
	I1003 18:28:33.148027  287189 system_pods.go:61] "registry-proxy-4nwwr" [5ad2d6c8-13b3-4729-a243-b2881c6c7d2b] Pending
	I1003 18:28:33.148036  287189 system_pods.go:61] "snapshot-controller-7d9fbc56b8-ct6ht" [90ef8c16-dc3b-446a-b290-7b60cc11a9de] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1003 18:28:33.148054  287189 system_pods.go:61] "snapshot-controller-7d9fbc56b8-k5rg9" [1c8643cd-15e1-4798-916a-253affe08a69] Pending
	I1003 18:28:33.148065  287189 system_pods.go:61] "storage-provisioner" [7632d49f-2ddc-429b-a88b-02e68f1b42e3] Pending
	I1003 18:28:33.148071  287189 system_pods.go:74] duration metric: took 80.729026ms to wait for pod list to return data ...
	I1003 18:28:33.148096  287189 default_sa.go:34] waiting for default service account to be created ...
	I1003 18:28:33.191788  287189 default_sa.go:45] found service account: "default"
	I1003 18:28:33.191824  287189 default_sa.go:55] duration metric: took 43.720773ms for default service account to be created ...
	I1003 18:28:33.191835  287189 system_pods.go:116] waiting for k8s-apps to be running ...
	I1003 18:28:33.329884  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:33.330213  287189 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1003 18:28:33.330240  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:33.330792  287189 system_pods.go:86] 19 kube-system pods found
	I1003 18:28:33.330821  287189 system_pods.go:89] "coredns-66bc5c9577-2hhqm" [daea3b45-b31f-453a-80f5-c30f7fce4122] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1003 18:28:33.330828  287189 system_pods.go:89] "csi-hostpath-attacher-0" [376ecb21-1ca4-4f77-bac5-a4b5af7ccfdd] Pending
	I1003 18:28:33.330843  287189 system_pods.go:89] "csi-hostpath-resizer-0" [8450e23b-d7f0-4b50-a20c-a7fc38411191] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1003 18:28:33.330852  287189 system_pods.go:89] "csi-hostpathplugin-vsbgb" [e6597406-4522-46da-ad41-da01126918f9] Pending
	I1003 18:28:33.330859  287189 system_pods.go:89] "etcd-addons-952140" [6c2991f4-ee56-4fbb-8f55-bf86ce3c8bc3] Running
	I1003 18:28:33.330863  287189 system_pods.go:89] "kindnet-vx5lb" [39f3102d-aa3a-4a72-b884-4fcf57878faf] Running
	I1003 18:28:33.330867  287189 system_pods.go:89] "kube-apiserver-addons-952140" [70a4748f-eedd-41aa-8ade-b8d13f6c85fe] Running
	I1003 18:28:33.330872  287189 system_pods.go:89] "kube-controller-manager-addons-952140" [7710c628-50b2-44d1-9faa-7ba463e404c9] Running
	I1003 18:28:33.330883  287189 system_pods.go:89] "kube-ingress-dns-minikube" [fbc268d3-be63-48bd-a93c-f3466f7458ed] Pending
	I1003 18:28:33.330887  287189 system_pods.go:89] "kube-proxy-5hd7r" [674b4e86-cafa-4e3f-8b57-719de4a646f5] Running
	I1003 18:28:33.330891  287189 system_pods.go:89] "kube-scheduler-addons-952140" [ef6d468e-24f4-474f-adeb-1d9e9cf74c87] Running
	I1003 18:28:33.330895  287189 system_pods.go:89] "metrics-server-85b7d694d7-tscmk" [51883ecf-f53c-4001-af25-5785ed3fa7db] Pending
	I1003 18:28:33.330905  287189 system_pods.go:89] "nvidia-device-plugin-daemonset-84v2d" [c0869084-f969-40cf-8475-57eedeb02a93] Pending
	I1003 18:28:33.330909  287189 system_pods.go:89] "registry-66898fdd98-88sgc" [749ffc38-9d67-4777-b96d-422ce39f2b46] Pending
	I1003 18:28:33.330920  287189 system_pods.go:89] "registry-creds-764b6fb674-dqntl" [57dce88b-cd6c-4f39-babf-2079e2174e05] Pending
	I1003 18:28:33.330925  287189 system_pods.go:89] "registry-proxy-4nwwr" [5ad2d6c8-13b3-4729-a243-b2881c6c7d2b] Pending
	I1003 18:28:33.330934  287189 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ct6ht" [90ef8c16-dc3b-446a-b290-7b60cc11a9de] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1003 18:28:33.330938  287189 system_pods.go:89] "snapshot-controller-7d9fbc56b8-k5rg9" [1c8643cd-15e1-4798-916a-253affe08a69] Pending
	I1003 18:28:33.330944  287189 system_pods.go:89] "storage-provisioner" [7632d49f-2ddc-429b-a88b-02e68f1b42e3] Pending
	I1003 18:28:33.330958  287189 retry.go:31] will retry after 207.53529ms: missing components: kube-dns
	I1003 18:28:33.525198  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:33.525665  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:33.554907  287189 system_pods.go:86] 19 kube-system pods found
	I1003 18:28:33.554969  287189 system_pods.go:89] "coredns-66bc5c9577-2hhqm" [daea3b45-b31f-453a-80f5-c30f7fce4122] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1003 18:28:33.554981  287189 system_pods.go:89] "csi-hostpath-attacher-0" [376ecb21-1ca4-4f77-bac5-a4b5af7ccfdd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1003 18:28:33.554990  287189 system_pods.go:89] "csi-hostpath-resizer-0" [8450e23b-d7f0-4b50-a20c-a7fc38411191] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1003 18:28:33.555003  287189 system_pods.go:89] "csi-hostpathplugin-vsbgb" [e6597406-4522-46da-ad41-da01126918f9] Pending
	I1003 18:28:33.555026  287189 system_pods.go:89] "etcd-addons-952140" [6c2991f4-ee56-4fbb-8f55-bf86ce3c8bc3] Running
	I1003 18:28:33.555032  287189 system_pods.go:89] "kindnet-vx5lb" [39f3102d-aa3a-4a72-b884-4fcf57878faf] Running
	I1003 18:28:33.555042  287189 system_pods.go:89] "kube-apiserver-addons-952140" [70a4748f-eedd-41aa-8ade-b8d13f6c85fe] Running
	I1003 18:28:33.555047  287189 system_pods.go:89] "kube-controller-manager-addons-952140" [7710c628-50b2-44d1-9faa-7ba463e404c9] Running
	I1003 18:28:33.555063  287189 system_pods.go:89] "kube-ingress-dns-minikube" [fbc268d3-be63-48bd-a93c-f3466f7458ed] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1003 18:28:33.555079  287189 system_pods.go:89] "kube-proxy-5hd7r" [674b4e86-cafa-4e3f-8b57-719de4a646f5] Running
	I1003 18:28:33.555085  287189 system_pods.go:89] "kube-scheduler-addons-952140" [ef6d468e-24f4-474f-adeb-1d9e9cf74c87] Running
	I1003 18:28:33.555102  287189 system_pods.go:89] "metrics-server-85b7d694d7-tscmk" [51883ecf-f53c-4001-af25-5785ed3fa7db] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1003 18:28:33.555118  287189 system_pods.go:89] "nvidia-device-plugin-daemonset-84v2d" [c0869084-f969-40cf-8475-57eedeb02a93] Pending
	I1003 18:28:33.555132  287189 system_pods.go:89] "registry-66898fdd98-88sgc" [749ffc38-9d67-4777-b96d-422ce39f2b46] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1003 18:28:33.555141  287189 system_pods.go:89] "registry-creds-764b6fb674-dqntl" [57dce88b-cd6c-4f39-babf-2079e2174e05] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1003 18:28:33.555157  287189 system_pods.go:89] "registry-proxy-4nwwr" [5ad2d6c8-13b3-4729-a243-b2881c6c7d2b] Pending
	I1003 18:28:33.555164  287189 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ct6ht" [90ef8c16-dc3b-446a-b290-7b60cc11a9de] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1003 18:28:33.555175  287189 system_pods.go:89] "snapshot-controller-7d9fbc56b8-k5rg9" [1c8643cd-15e1-4798-916a-253affe08a69] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1003 18:28:33.555189  287189 system_pods.go:89] "storage-provisioner" [7632d49f-2ddc-429b-a88b-02e68f1b42e3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1003 18:28:33.555217  287189 retry.go:31] will retry after 295.743819ms: missing components: kube-dns
	I1003 18:28:33.747816  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:33.849840  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:33.952057  287189 system_pods.go:86] 19 kube-system pods found
	I1003 18:28:33.952103  287189 system_pods.go:89] "coredns-66bc5c9577-2hhqm" [daea3b45-b31f-453a-80f5-c30f7fce4122] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1003 18:28:33.952117  287189 system_pods.go:89] "csi-hostpath-attacher-0" [376ecb21-1ca4-4f77-bac5-a4b5af7ccfdd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1003 18:28:33.952124  287189 system_pods.go:89] "csi-hostpath-resizer-0" [8450e23b-d7f0-4b50-a20c-a7fc38411191] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1003 18:28:33.952131  287189 system_pods.go:89] "csi-hostpathplugin-vsbgb" [e6597406-4522-46da-ad41-da01126918f9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1003 18:28:33.952139  287189 system_pods.go:89] "etcd-addons-952140" [6c2991f4-ee56-4fbb-8f55-bf86ce3c8bc3] Running
	I1003 18:28:33.952155  287189 system_pods.go:89] "kindnet-vx5lb" [39f3102d-aa3a-4a72-b884-4fcf57878faf] Running
	I1003 18:28:33.952164  287189 system_pods.go:89] "kube-apiserver-addons-952140" [70a4748f-eedd-41aa-8ade-b8d13f6c85fe] Running
	I1003 18:28:33.952174  287189 system_pods.go:89] "kube-controller-manager-addons-952140" [7710c628-50b2-44d1-9faa-7ba463e404c9] Running
	I1003 18:28:33.952187  287189 system_pods.go:89] "kube-ingress-dns-minikube" [fbc268d3-be63-48bd-a93c-f3466f7458ed] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1003 18:28:33.952191  287189 system_pods.go:89] "kube-proxy-5hd7r" [674b4e86-cafa-4e3f-8b57-719de4a646f5] Running
	I1003 18:28:33.952197  287189 system_pods.go:89] "kube-scheduler-addons-952140" [ef6d468e-24f4-474f-adeb-1d9e9cf74c87] Running
	I1003 18:28:33.952204  287189 system_pods.go:89] "metrics-server-85b7d694d7-tscmk" [51883ecf-f53c-4001-af25-5785ed3fa7db] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1003 18:28:33.952227  287189 system_pods.go:89] "nvidia-device-plugin-daemonset-84v2d" [c0869084-f969-40cf-8475-57eedeb02a93] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1003 18:28:33.952240  287189 system_pods.go:89] "registry-66898fdd98-88sgc" [749ffc38-9d67-4777-b96d-422ce39f2b46] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1003 18:28:33.952250  287189 system_pods.go:89] "registry-creds-764b6fb674-dqntl" [57dce88b-cd6c-4f39-babf-2079e2174e05] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1003 18:28:33.952264  287189 system_pods.go:89] "registry-proxy-4nwwr" [5ad2d6c8-13b3-4729-a243-b2881c6c7d2b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1003 18:28:33.952272  287189 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ct6ht" [90ef8c16-dc3b-446a-b290-7b60cc11a9de] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1003 18:28:33.952283  287189 system_pods.go:89] "snapshot-controller-7d9fbc56b8-k5rg9" [1c8643cd-15e1-4798-916a-253affe08a69] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1003 18:28:33.952289  287189 system_pods.go:89] "storage-provisioner" [7632d49f-2ddc-429b-a88b-02e68f1b42e3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1003 18:28:33.952312  287189 retry.go:31] will retry after 463.876902ms: missing components: kube-dns
	I1003 18:28:34.051191  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:34.051330  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:34.247166  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:34.287672  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:34.421608  287189 system_pods.go:86] 19 kube-system pods found
	I1003 18:28:34.421643  287189 system_pods.go:89] "coredns-66bc5c9577-2hhqm" [daea3b45-b31f-453a-80f5-c30f7fce4122] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1003 18:28:34.421652  287189 system_pods.go:89] "csi-hostpath-attacher-0" [376ecb21-1ca4-4f77-bac5-a4b5af7ccfdd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1003 18:28:34.421660  287189 system_pods.go:89] "csi-hostpath-resizer-0" [8450e23b-d7f0-4b50-a20c-a7fc38411191] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1003 18:28:34.421675  287189 system_pods.go:89] "csi-hostpathplugin-vsbgb" [e6597406-4522-46da-ad41-da01126918f9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1003 18:28:34.421683  287189 system_pods.go:89] "etcd-addons-952140" [6c2991f4-ee56-4fbb-8f55-bf86ce3c8bc3] Running
	I1003 18:28:34.421689  287189 system_pods.go:89] "kindnet-vx5lb" [39f3102d-aa3a-4a72-b884-4fcf57878faf] Running
	I1003 18:28:34.421700  287189 system_pods.go:89] "kube-apiserver-addons-952140" [70a4748f-eedd-41aa-8ade-b8d13f6c85fe] Running
	I1003 18:28:34.421704  287189 system_pods.go:89] "kube-controller-manager-addons-952140" [7710c628-50b2-44d1-9faa-7ba463e404c9] Running
	I1003 18:28:34.421711  287189 system_pods.go:89] "kube-ingress-dns-minikube" [fbc268d3-be63-48bd-a93c-f3466f7458ed] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1003 18:28:34.421720  287189 system_pods.go:89] "kube-proxy-5hd7r" [674b4e86-cafa-4e3f-8b57-719de4a646f5] Running
	I1003 18:28:34.421724  287189 system_pods.go:89] "kube-scheduler-addons-952140" [ef6d468e-24f4-474f-adeb-1d9e9cf74c87] Running
	I1003 18:28:34.421732  287189 system_pods.go:89] "metrics-server-85b7d694d7-tscmk" [51883ecf-f53c-4001-af25-5785ed3fa7db] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1003 18:28:34.421752  287189 system_pods.go:89] "nvidia-device-plugin-daemonset-84v2d" [c0869084-f969-40cf-8475-57eedeb02a93] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1003 18:28:34.421760  287189 system_pods.go:89] "registry-66898fdd98-88sgc" [749ffc38-9d67-4777-b96d-422ce39f2b46] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1003 18:28:34.421768  287189 system_pods.go:89] "registry-creds-764b6fb674-dqntl" [57dce88b-cd6c-4f39-babf-2079e2174e05] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1003 18:28:34.421780  287189 system_pods.go:89] "registry-proxy-4nwwr" [5ad2d6c8-13b3-4729-a243-b2881c6c7d2b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1003 18:28:34.421786  287189 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ct6ht" [90ef8c16-dc3b-446a-b290-7b60cc11a9de] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1003 18:28:34.421792  287189 system_pods.go:89] "snapshot-controller-7d9fbc56b8-k5rg9" [1c8643cd-15e1-4798-916a-253affe08a69] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1003 18:28:34.421800  287189 system_pods.go:89] "storage-provisioner" [7632d49f-2ddc-429b-a88b-02e68f1b42e3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1003 18:28:34.421826  287189 retry.go:31] will retry after 374.526593ms: missing components: kube-dns
	I1003 18:28:34.522771  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:34.523195  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:34.748517  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:34.788246  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:34.850410  287189 system_pods.go:86] 19 kube-system pods found
	I1003 18:28:34.850500  287189 system_pods.go:89] "coredns-66bc5c9577-2hhqm" [daea3b45-b31f-453a-80f5-c30f7fce4122] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1003 18:28:34.850525  287189 system_pods.go:89] "csi-hostpath-attacher-0" [376ecb21-1ca4-4f77-bac5-a4b5af7ccfdd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1003 18:28:34.850563  287189 system_pods.go:89] "csi-hostpath-resizer-0" [8450e23b-d7f0-4b50-a20c-a7fc38411191] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1003 18:28:34.850590  287189 system_pods.go:89] "csi-hostpathplugin-vsbgb" [e6597406-4522-46da-ad41-da01126918f9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1003 18:28:34.850619  287189 system_pods.go:89] "etcd-addons-952140" [6c2991f4-ee56-4fbb-8f55-bf86ce3c8bc3] Running
	I1003 18:28:34.850640  287189 system_pods.go:89] "kindnet-vx5lb" [39f3102d-aa3a-4a72-b884-4fcf57878faf] Running
	I1003 18:28:34.850670  287189 system_pods.go:89] "kube-apiserver-addons-952140" [70a4748f-eedd-41aa-8ade-b8d13f6c85fe] Running
	I1003 18:28:34.850698  287189 system_pods.go:89] "kube-controller-manager-addons-952140" [7710c628-50b2-44d1-9faa-7ba463e404c9] Running
	I1003 18:28:34.850722  287189 system_pods.go:89] "kube-ingress-dns-minikube" [fbc268d3-be63-48bd-a93c-f3466f7458ed] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1003 18:28:34.850740  287189 system_pods.go:89] "kube-proxy-5hd7r" [674b4e86-cafa-4e3f-8b57-719de4a646f5] Running
	I1003 18:28:34.850774  287189 system_pods.go:89] "kube-scheduler-addons-952140" [ef6d468e-24f4-474f-adeb-1d9e9cf74c87] Running
	I1003 18:28:34.850800  287189 system_pods.go:89] "metrics-server-85b7d694d7-tscmk" [51883ecf-f53c-4001-af25-5785ed3fa7db] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1003 18:28:34.850823  287189 system_pods.go:89] "nvidia-device-plugin-daemonset-84v2d" [c0869084-f969-40cf-8475-57eedeb02a93] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1003 18:28:34.850843  287189 system_pods.go:89] "registry-66898fdd98-88sgc" [749ffc38-9d67-4777-b96d-422ce39f2b46] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1003 18:28:34.850875  287189 system_pods.go:89] "registry-creds-764b6fb674-dqntl" [57dce88b-cd6c-4f39-babf-2079e2174e05] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1003 18:28:34.850899  287189 system_pods.go:89] "registry-proxy-4nwwr" [5ad2d6c8-13b3-4729-a243-b2881c6c7d2b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1003 18:28:34.850917  287189 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ct6ht" [90ef8c16-dc3b-446a-b290-7b60cc11a9de] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1003 18:28:34.850937  287189 system_pods.go:89] "snapshot-controller-7d9fbc56b8-k5rg9" [1c8643cd-15e1-4798-916a-253affe08a69] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1003 18:28:34.850957  287189 system_pods.go:89] "storage-provisioner" [7632d49f-2ddc-429b-a88b-02e68f1b42e3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1003 18:28:34.850996  287189 retry.go:31] will retry after 632.453233ms: missing components: kube-dns
	I1003 18:28:35.023288  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:35.023804  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:35.247576  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:35.288178  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:35.489626  287189 system_pods.go:86] 19 kube-system pods found
	I1003 18:28:35.489676  287189 system_pods.go:89] "coredns-66bc5c9577-2hhqm" [daea3b45-b31f-453a-80f5-c30f7fce4122] Running
	I1003 18:28:35.489689  287189 system_pods.go:89] "csi-hostpath-attacher-0" [376ecb21-1ca4-4f77-bac5-a4b5af7ccfdd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1003 18:28:35.489699  287189 system_pods.go:89] "csi-hostpath-resizer-0" [8450e23b-d7f0-4b50-a20c-a7fc38411191] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1003 18:28:35.489716  287189 system_pods.go:89] "csi-hostpathplugin-vsbgb" [e6597406-4522-46da-ad41-da01126918f9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1003 18:28:35.489731  287189 system_pods.go:89] "etcd-addons-952140" [6c2991f4-ee56-4fbb-8f55-bf86ce3c8bc3] Running
	I1003 18:28:35.489743  287189 system_pods.go:89] "kindnet-vx5lb" [39f3102d-aa3a-4a72-b884-4fcf57878faf] Running
	I1003 18:28:35.489748  287189 system_pods.go:89] "kube-apiserver-addons-952140" [70a4748f-eedd-41aa-8ade-b8d13f6c85fe] Running
	I1003 18:28:35.489752  287189 system_pods.go:89] "kube-controller-manager-addons-952140" [7710c628-50b2-44d1-9faa-7ba463e404c9] Running
	I1003 18:28:35.489762  287189 system_pods.go:89] "kube-ingress-dns-minikube" [fbc268d3-be63-48bd-a93c-f3466f7458ed] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1003 18:28:35.489778  287189 system_pods.go:89] "kube-proxy-5hd7r" [674b4e86-cafa-4e3f-8b57-719de4a646f5] Running
	I1003 18:28:35.489785  287189 system_pods.go:89] "kube-scheduler-addons-952140" [ef6d468e-24f4-474f-adeb-1d9e9cf74c87] Running
	I1003 18:28:35.489791  287189 system_pods.go:89] "metrics-server-85b7d694d7-tscmk" [51883ecf-f53c-4001-af25-5785ed3fa7db] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1003 18:28:35.489799  287189 system_pods.go:89] "nvidia-device-plugin-daemonset-84v2d" [c0869084-f969-40cf-8475-57eedeb02a93] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1003 18:28:35.489809  287189 system_pods.go:89] "registry-66898fdd98-88sgc" [749ffc38-9d67-4777-b96d-422ce39f2b46] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1003 18:28:35.489822  287189 system_pods.go:89] "registry-creds-764b6fb674-dqntl" [57dce88b-cd6c-4f39-babf-2079e2174e05] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1003 18:28:35.489827  287189 system_pods.go:89] "registry-proxy-4nwwr" [5ad2d6c8-13b3-4729-a243-b2881c6c7d2b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1003 18:28:35.489834  287189 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ct6ht" [90ef8c16-dc3b-446a-b290-7b60cc11a9de] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1003 18:28:35.489859  287189 system_pods.go:89] "snapshot-controller-7d9fbc56b8-k5rg9" [1c8643cd-15e1-4798-916a-253affe08a69] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1003 18:28:35.489869  287189 system_pods.go:89] "storage-provisioner" [7632d49f-2ddc-429b-a88b-02e68f1b42e3] Running
	I1003 18:28:35.489878  287189 system_pods.go:126] duration metric: took 2.298036212s to wait for k8s-apps to be running ...
	I1003 18:28:35.489888  287189 system_svc.go:44] waiting for kubelet service to be running ....
	I1003 18:28:35.489973  287189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 18:28:35.505835  287189 system_svc.go:56] duration metric: took 15.932073ms WaitForService to wait for kubelet
	I1003 18:28:35.505921  287189 kubeadm.go:586] duration metric: took 43.619137647s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 18:28:35.505955  287189 node_conditions.go:102] verifying NodePressure condition ...
	I1003 18:28:35.509616  287189 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1003 18:28:35.509695  287189 node_conditions.go:123] node cpu capacity is 2
	I1003 18:28:35.509723  287189 node_conditions.go:105] duration metric: took 3.736202ms to run NodePressure ...
	I1003 18:28:35.509763  287189 start.go:241] waiting for startup goroutines ...
	I1003 18:28:35.523605  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:35.524693  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:35.748072  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:35.853007  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:36.021062  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:36.022187  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:36.247549  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:36.287585  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:36.523033  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:36.523339  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:36.747950  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:36.787618  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:37.026567  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:37.026840  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:37.247118  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:37.287975  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:37.521479  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:37.521873  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:37.747527  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:37.787986  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:38.024451  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:38.024785  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:38.247882  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:38.287612  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:38.522576  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:38.522806  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:38.748179  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:38.787220  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:39.029453  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:39.029500  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:39.247691  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:39.287834  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:39.523239  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:39.523522  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:39.747908  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:39.787841  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:40.024756  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:40.025306  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:40.248152  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:40.287636  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:40.521526  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:40.521669  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:40.747686  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:40.788079  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:41.026921  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:41.027340  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:41.247966  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:41.287776  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:41.522160  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:41.522793  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:41.746938  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:41.787254  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:42.023432  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:42.024609  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:42.249542  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:42.349312  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:42.521949  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:42.522412  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:42.747480  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:42.788127  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:43.021932  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:43.023247  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:43.248100  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:43.287988  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:43.521705  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:43.522213  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:43.747373  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:43.787795  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:44.022579  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:44.023067  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:44.248345  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:44.287912  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:44.520924  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:44.523144  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:44.747261  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:44.787397  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:45.037386  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:45.045151  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:45.248613  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:45.291583  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:45.524043  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:45.524640  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:45.747821  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:45.787935  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:46.024100  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:46.024598  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:46.248066  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:46.287633  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:46.525795  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:46.526258  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:46.747700  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:46.787797  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:47.022727  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:47.023236  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:47.247367  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:47.287515  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:47.522252  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:47.522617  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:47.747538  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:47.787580  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:48.022780  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:48.023565  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:48.248440  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:48.288379  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:48.313869  287189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1003 18:28:48.523570  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:48.524016  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:48.747168  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:48.787681  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:49.034754  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:49.036447  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:49.246998  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:49.287887  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:49.413195  287189 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.099286831s)
	W1003 18:28:49.413234  287189 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:28:49.413258  287189 retry.go:31] will retry after 31.075300228s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:28:49.521991  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:49.522124  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:49.747143  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:49.787107  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:50.024250  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:50.024390  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:50.247749  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:50.287449  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:50.523574  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:50.523711  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:50.747046  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:50.787786  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:51.022990  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:51.023160  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:51.248255  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:51.287849  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:51.522253  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:51.523809  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:51.747032  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:51.787694  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:52.021379  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:52.024131  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:52.247633  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:52.289591  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:52.527843  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:52.528837  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:52.747171  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:52.787820  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:53.024000  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:53.024327  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:53.249103  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:53.289677  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:53.522886  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:53.523025  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:53.747309  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:53.788164  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:54.024120  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:54.024429  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:54.255232  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:54.287561  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:54.522688  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:54.523792  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:54.746720  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:54.788064  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:55.025952  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:55.026477  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:55.253046  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:55.298189  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:55.524595  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:55.525002  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:55.748425  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:55.788062  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:56.023150  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:56.023518  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:56.247541  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:56.287976  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:56.521760  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:56.523005  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:56.747368  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:56.787889  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:57.023599  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:57.024018  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:57.247400  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:57.287794  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:57.524230  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:57.524699  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:57.747683  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:57.787732  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:58.036025  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:58.036550  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:58.248530  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:58.287925  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:58.521462  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:58.523214  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:58.747184  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:58.788595  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:59.022257  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:59.023503  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:59.247419  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:59.288057  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:59.523209  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:59.525605  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:59.746930  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:59.788118  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:00.023123  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:00.024415  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:29:00.288056  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:00.306708  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:00.522876  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:29:00.523023  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:00.747568  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:00.787842  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:01.020754  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:01.022678  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:29:01.247382  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:01.287858  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:01.523169  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:29:01.523746  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:01.747131  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:01.787455  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:02.022655  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:29:02.022850  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:02.247565  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:02.288473  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:02.524164  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:02.524802  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:29:02.747204  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:02.787719  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:03.021247  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:03.022948  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:29:03.246659  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:03.287356  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:03.523598  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:03.525784  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:29:03.750285  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:03.788289  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:04.022583  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:29:04.022764  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:04.246642  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:04.288818  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:04.523687  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:04.524279  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:29:04.747687  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:04.788272  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:05.024069  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:05.024535  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:29:05.247498  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:05.287797  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:05.521200  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:05.522479  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:29:05.747728  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:05.788131  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:06.021776  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:06.024397  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:29:06.247627  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:06.288915  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:06.522021  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:06.523673  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:29:06.747624  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:06.787595  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:07.022878  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:29:07.023138  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:07.247384  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:07.288141  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:07.523807  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:07.524396  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:29:07.747794  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:07.787821  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:08.023489  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:29:08.024135  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:08.247310  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:08.287771  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:08.523782  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:29:08.524050  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:08.747362  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:08.788436  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:09.023839  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:09.024243  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:29:09.247390  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:09.287288  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:09.521479  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:09.522960  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:29:09.747047  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:09.787205  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:10.024129  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:10.024795  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:29:10.248395  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:10.288582  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:10.525468  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:10.527174  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:29:10.747478  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:10.788089  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:11.022595  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:11.022933  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:29:11.248100  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:11.288508  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:11.523105  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:29:11.523148  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:11.747391  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:11.788252  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:12.023485  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:29:12.023678  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:12.270521  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:12.303366  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:12.522263  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:12.522307  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:29:12.747376  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:12.787712  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:13.026769  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:29:13.026947  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:13.247324  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:13.287652  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:13.522205  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:13.522416  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:29:13.747819  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:13.787547  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:14.022953  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:29:14.023922  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:14.246932  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:14.288822  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:14.523530  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:29:14.523869  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:14.747033  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:14.787743  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:15.031877  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:29:15.032250  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:15.247090  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:15.287400  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:15.524082  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:29:15.524385  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:15.747659  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:15.848367  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:16.022825  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:29:16.023028  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:16.248004  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:16.287693  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:16.523974  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:16.524067  287189 kapi.go:107] duration metric: took 1m18.505203594s to wait for kubernetes.io/minikube-addons=registry ...
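The kapi.go polling above (one "waiting for pod" line roughly every 250-500ms per label selector, until the registry selector finally completed after 1m18.5s) boils down to listing pods by label and sleeping until every match is Running. The sketch below is illustrative only and is not minikube's actual kapi implementation; the namespace, selector, timeout, and kubeconfig location are assumptions.

	// Minimal sketch (not minikube's kapi code) of waiting for pods by label selector, assuming client-go.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForPods polls pods matching selector in ns until all of them are Running or the timeout expires.
	func waitForPods(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return err
			}
			allRunning := len(pods.Items) > 0
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					allRunning = false
					break
				}
			}
			if allRunning {
				return nil
			}
			time.Sleep(500 * time.Millisecond) // poll interval, roughly the cadence seen in the log
		}
		return fmt.Errorf("timed out waiting for pods matching %q", selector)
	}

	func main() {
		// Assumes a kubeconfig at the default location; namespace and selector below are assumptions for illustration.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		if err := waitForPods(cs, "kube-system", "kubernetes.io/minikube-addons=registry", 6*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("all matching pods are Running")
	}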
	I1003 18:29:16.746785  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:16.788081  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:17.021843  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:17.251389  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:17.288439  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:17.522316  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:17.747871  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:17.789475  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:18.022498  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:18.248585  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:18.288620  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:18.523490  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:18.747542  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:18.787591  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:19.021717  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:19.252694  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:19.287946  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:19.521767  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:19.746495  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:19.787403  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:20.022013  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:20.247136  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:20.287634  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:20.489073  287189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1003 18:29:20.521991  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:20.746858  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:20.787428  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:21.021438  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:21.247100  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:21.287329  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:21.521832  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:21.575935  287189 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.086825771s)
	W1003 18:29:21.575978  287189 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:29:21.576058  287189 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
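The repeated apply failure above is what makes enabling 'inspektor-gadget' fail: kubectl rejects /etc/kubernetes/addons/ig-crd.yaml because at least one document in that file has neither apiVersion nor kind set. The real contents of ig-crd.yaml are not shown in this log, so the following is only a hedged illustration of the header every document in a CRD manifest needs before validation will accept it; the group and resource names are placeholders, not the actual gadget CRD.

	apiVersion: apiextensions.k8s.io/v1        # required; its absence triggers "apiVersion not set"
	kind: CustomResourceDefinition             # required; its absence triggers "kind not set"
	metadata:
	  name: examples.gadget.example.io         # placeholder name for illustration only
	spec:
	  group: gadget.example.io                 # placeholder group
	  names:
	    kind: Example
	    plural: examples
	  scope: Namespaced
	  versions:
	    - name: v1alpha1
	      served: true
	      storage: true
	      schema:
	        openAPIV3Schema:
	          type: object

Passing --validate=false, as the error message suggests, would only suppress the check rather than repair the manifest, so the retry 31s later hits the same validation error.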
	I1003 18:29:21.747146  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:21.787541  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:22.022468  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:22.247580  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:22.348785  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:22.522427  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:22.747644  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:22.788279  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:23.021611  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:23.246819  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:23.288520  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:23.521804  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:23.747187  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:23.788129  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:24.021888  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:24.246933  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:24.288411  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:24.522258  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:24.747605  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:24.788035  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:25.021893  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:25.247353  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:25.287881  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:25.521602  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:25.748799  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:25.787910  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:26.021680  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:26.247697  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:26.288166  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:26.521906  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:26.747080  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:26.787461  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:27.021878  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:27.246773  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:27.287704  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:27.521537  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:27.747747  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:27.787860  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:28.028265  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:28.247464  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:28.301702  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:28.522496  287189 kapi.go:107] duration metric: took 1m30.504588411s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1003 18:29:28.747534  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:28.787697  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:29.334196  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:29.334511  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:29.747260  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:29.787717  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:30.247229  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:30.288013  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:30.747803  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:30.787087  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:31.247211  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:31.287083  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:31.747812  287189 kapi.go:107] duration metric: took 1m30.003954252s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1003 18:29:31.750935  287189 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-952140 cluster.
	I1003 18:29:31.754029  287189 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1003 18:29:31.757342  287189 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1003 18:29:31.787339  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:32.287733  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:32.787869  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:33.287608  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:33.787037  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:34.288117  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:34.787560  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:35.286877  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:35.788011  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:36.288315  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:36.788306  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:37.292152  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:37.794105  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:38.294232  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:38.788565  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:39.293698  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:39.788174  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:40.287870  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:40.787987  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:41.287959  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:41.787864  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:42.287597  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:42.787522  287189 kapi.go:107] duration metric: took 1m44.503650299s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1003 18:29:42.790777  287189 out.go:179] * Enabled addons: registry-creds, nvidia-device-plugin, storage-provisioner, cloud-spanner, ingress-dns, amd-gpu-device-plugin, storage-provisioner-rancher, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1003 18:29:42.793786  287189 addons.go:514] duration metric: took 1m50.907714201s for enable addons: enabled=[registry-creds nvidia-device-plugin storage-provisioner cloud-spanner ingress-dns amd-gpu-device-plugin storage-provisioner-rancher metrics-server yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1003 18:29:42.793867  287189 start.go:246] waiting for cluster config update ...
	I1003 18:29:42.793913  287189 start.go:255] writing updated cluster config ...
	I1003 18:29:42.794250  287189 ssh_runner.go:195] Run: rm -f paused
	I1003 18:29:42.797821  287189 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1003 18:29:42.802183  287189 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-2hhqm" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 18:29:42.808836  287189 pod_ready.go:94] pod "coredns-66bc5c9577-2hhqm" is "Ready"
	I1003 18:29:42.808865  287189 pod_ready.go:86] duration metric: took 6.652804ms for pod "coredns-66bc5c9577-2hhqm" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 18:29:42.810967  287189 pod_ready.go:83] waiting for pod "etcd-addons-952140" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 18:29:42.814940  287189 pod_ready.go:94] pod "etcd-addons-952140" is "Ready"
	I1003 18:29:42.814964  287189 pod_ready.go:86] duration metric: took 3.975951ms for pod "etcd-addons-952140" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 18:29:42.817323  287189 pod_ready.go:83] waiting for pod "kube-apiserver-addons-952140" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 18:29:42.822282  287189 pod_ready.go:94] pod "kube-apiserver-addons-952140" is "Ready"
	I1003 18:29:42.822310  287189 pod_ready.go:86] duration metric: took 4.962943ms for pod "kube-apiserver-addons-952140" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 18:29:42.824755  287189 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-952140" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 18:29:43.203261  287189 pod_ready.go:94] pod "kube-controller-manager-addons-952140" is "Ready"
	I1003 18:29:43.203291  287189 pod_ready.go:86] duration metric: took 378.5099ms for pod "kube-controller-manager-addons-952140" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 18:29:43.403250  287189 pod_ready.go:83] waiting for pod "kube-proxy-5hd7r" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 18:29:43.802480  287189 pod_ready.go:94] pod "kube-proxy-5hd7r" is "Ready"
	I1003 18:29:43.802508  287189 pod_ready.go:86] duration metric: took 399.228533ms for pod "kube-proxy-5hd7r" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 18:29:44.002998  287189 pod_ready.go:83] waiting for pod "kube-scheduler-addons-952140" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 18:29:44.403334  287189 pod_ready.go:94] pod "kube-scheduler-addons-952140" is "Ready"
	I1003 18:29:44.403361  287189 pod_ready.go:86] duration metric: took 400.338076ms for pod "kube-scheduler-addons-952140" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 18:29:44.403373  287189 pod_ready.go:40] duration metric: took 1.605518935s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1003 18:29:44.460224  287189 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1003 18:29:44.463385  287189 out.go:179] * Done! kubectl is now configured to use "addons-952140" cluster and "default" namespace by default
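
The pod_ready.go lines near the end of the start log poll the kube-system control-plane pods (matched by the listed labels) until each reports a Ready condition. The following is a minimal client-go sketch of that readiness check, reusing the kubeconfig path and one of the label selectors shown in the log; it is an illustration under those assumptions, not minikube's own pod_ready.go implementation.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether a pod's Ready condition is True, the same check
// the pod_ready.go log lines above perform for each kube-system pod.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// List pods by one of the label selectors the report waits on.
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s ready=%v\n", p.Name, podReady(&p))
	}
}
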
	
	
	==> CRI-O <==
	Oct 03 18:32:43 addons-952140 crio[830]: time="2025-10-03T18:32:43.076432675Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b" id=0ef1910a-28c4-4d12-b712-e62b0c756419 name=/runtime.v1.ImageService/PullImage
	Oct 03 18:32:43 addons-952140 crio[830]: time="2025-10-03T18:32:43.07734206Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=841785bb-93b4-41ce-b5e2-a5ea70f53e40 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:32:43 addons-952140 crio[830]: time="2025-10-03T18:32:43.081404632Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=fc3c2a02-7952-4cc9-a338-2861fd296c1c name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:32:43 addons-952140 crio[830]: time="2025-10-03T18:32:43.092421974Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-qbbpk/hello-world-app" id=d1d9e2ad-7bd9-4946-bf3c-22e23e0fc4e8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:32:43 addons-952140 crio[830]: time="2025-10-03T18:32:43.093409057Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:32:43 addons-952140 crio[830]: time="2025-10-03T18:32:43.104386726Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:32:43 addons-952140 crio[830]: time="2025-10-03T18:32:43.104774416Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/e45788814451f16dd490717b2a2b7fdb01d74445b56f97aa3e319f5850171606/merged/etc/passwd: no such file or directory"
	Oct 03 18:32:43 addons-952140 crio[830]: time="2025-10-03T18:32:43.104882441Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/e45788814451f16dd490717b2a2b7fdb01d74445b56f97aa3e319f5850171606/merged/etc/group: no such file or directory"
	Oct 03 18:32:43 addons-952140 crio[830]: time="2025-10-03T18:32:43.105238877Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:32:43 addons-952140 crio[830]: time="2025-10-03T18:32:43.133306122Z" level=info msg="Created container 0586e9f5e2a2c8f295a297441a43fdcd10c9df6319c2c145bcb61cd85d644d5a: default/hello-world-app-5d498dc89-qbbpk/hello-world-app" id=d1d9e2ad-7bd9-4946-bf3c-22e23e0fc4e8 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:32:43 addons-952140 crio[830]: time="2025-10-03T18:32:43.137097177Z" level=info msg="Starting container: 0586e9f5e2a2c8f295a297441a43fdcd10c9df6319c2c145bcb61cd85d644d5a" id=0a40a438-dfe2-4bc0-ab36-edfd06623785 name=/runtime.v1.RuntimeService/StartContainer
	Oct 03 18:32:43 addons-952140 crio[830]: time="2025-10-03T18:32:43.142084101Z" level=info msg="Started container" PID=6959 containerID=0586e9f5e2a2c8f295a297441a43fdcd10c9df6319c2c145bcb61cd85d644d5a description=default/hello-world-app-5d498dc89-qbbpk/hello-world-app id=0a40a438-dfe2-4bc0-ab36-edfd06623785 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8c99402fd7eaa23a781d214e4afc492e99c599f0a637c5350e87de0a7df63913
	Oct 03 18:32:43 addons-952140 crio[830]: time="2025-10-03T18:32:43.248569826Z" level=info msg="Running pod sandbox: kube-system/registry-creds-764b6fb674-dqntl/POD" id=940c20fe-b0b1-4b76-9a6b-c9de69a71d70 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 03 18:32:43 addons-952140 crio[830]: time="2025-10-03T18:32:43.248633976Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:32:43 addons-952140 crio[830]: time="2025-10-03T18:32:43.260160557Z" level=info msg="Got pod network &{Name:registry-creds-764b6fb674-dqntl Namespace:kube-system ID:a7326171ecec16974d8e230205be522ee6a61a9b2261249607a561566c9582de UID:57dce88b-cd6c-4f39-babf-2079e2174e05 NetNS:/var/run/netns/74304fbe-bd55-4ffd-b481-ac6e9f5f33b7 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40028a04b8}] Aliases:map[]}"
	Oct 03 18:32:43 addons-952140 crio[830]: time="2025-10-03T18:32:43.260326971Z" level=info msg="Adding pod kube-system_registry-creds-764b6fb674-dqntl to CNI network \"kindnet\" (type=ptp)"
	Oct 03 18:32:43 addons-952140 crio[830]: time="2025-10-03T18:32:43.278151599Z" level=info msg="Got pod network &{Name:registry-creds-764b6fb674-dqntl Namespace:kube-system ID:a7326171ecec16974d8e230205be522ee6a61a9b2261249607a561566c9582de UID:57dce88b-cd6c-4f39-babf-2079e2174e05 NetNS:/var/run/netns/74304fbe-bd55-4ffd-b481-ac6e9f5f33b7 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40028a04b8}] Aliases:map[]}"
	Oct 03 18:32:43 addons-952140 crio[830]: time="2025-10-03T18:32:43.278427531Z" level=info msg="Checking pod kube-system_registry-creds-764b6fb674-dqntl for CNI network kindnet (type=ptp)"
	Oct 03 18:32:43 addons-952140 crio[830]: time="2025-10-03T18:32:43.281726563Z" level=info msg="Ran pod sandbox a7326171ecec16974d8e230205be522ee6a61a9b2261249607a561566c9582de with infra container: kube-system/registry-creds-764b6fb674-dqntl/POD" id=940c20fe-b0b1-4b76-9a6b-c9de69a71d70 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 03 18:32:43 addons-952140 crio[830]: time="2025-10-03T18:32:43.283356599Z" level=info msg="Checking image status: docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605" id=d432e63f-e88f-4424-96b6-6494fda665b7 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:32:43 addons-952140 crio[830]: time="2025-10-03T18:32:43.283538686Z" level=info msg="Image docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605 not found" id=d432e63f-e88f-4424-96b6-6494fda665b7 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:32:43 addons-952140 crio[830]: time="2025-10-03T18:32:43.283588632Z" level=info msg="Neither image nor artfiact docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605 found" id=d432e63f-e88f-4424-96b6-6494fda665b7 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:32:43 addons-952140 crio[830]: time="2025-10-03T18:32:43.284975656Z" level=info msg="Pulling image: docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605" id=9dcd142c-6d3e-4327-9424-275715f04edb name=/runtime.v1.ImageService/PullImage
	Oct 03 18:32:43 addons-952140 crio[830]: time="2025-10-03T18:32:43.286689051Z" level=info msg="Trying to access \"docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605\""
	Oct 03 18:32:43 addons-952140 crio[830]: time="2025-10-03T18:32:43.507721316Z" level=info msg="Image operating system mismatch: image uses OS \"linux\"+architecture \"amd64\"+\"\", expecting one of \"linux+arm64+\\\"v8\\\", linux+arm64+\\\"\\\"\""
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	0586e9f5e2a2c       docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b                                        1 second ago        Running             hello-world-app                          0                   8c99402fd7eaa       hello-world-app-5d498dc89-qbbpk            default
	82ea792e7283b       docker.io/library/nginx@sha256:77d740efa8f9c4753f2a7212d8422b8c77411681971f400ea03d07fe38476cac                                              2 minutes ago       Running             nginx                                    0                   f37d318564ad9       nginx                                      default
	b6c3eb481631c       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          2 minutes ago       Running             busybox                                  0                   68836f5fdbebe       busybox                                    default
	a2a54b8525b1b       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          3 minutes ago       Running             csi-snapshotter                          0                   90055626cb73d       csi-hostpathplugin-vsbgb                   kube-system
	764f61b1d1b52       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          3 minutes ago       Running             csi-provisioner                          0                   90055626cb73d       csi-hostpathplugin-vsbgb                   kube-system
	5520f176a27b0       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            3 minutes ago       Running             liveness-probe                           0                   90055626cb73d       csi-hostpathplugin-vsbgb                   kube-system
	a55dd027b4c24       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           3 minutes ago       Running             hostpath                                 0                   90055626cb73d       csi-hostpathplugin-vsbgb                   kube-system
	58f575a0718cd       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:74b72c3673aff7e1fa7c3ebae80b5dbe5446ce1906ef8d4f98d4b9f6e72c88e1                            3 minutes ago       Running             gadget                                   0                   b4dfe4fefc5a8       gadget-8d4lm                               gadget
	d11765424ad97       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                3 minutes ago       Running             node-driver-registrar                    0                   90055626cb73d       csi-hostpathplugin-vsbgb                   kube-system
	1ca7b1012478e       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 3 minutes ago       Running             gcp-auth                                 0                   8f8339c688744       gcp-auth-78565c9fb4-qh9mv                  gcp-auth
	1a1f2d65645ab       registry.k8s.io/ingress-nginx/controller@sha256:f99290cbebde470590890356f061fd429ff3def99cc2dedb1fcd21626c5d73d6                             3 minutes ago       Running             controller                               0                   d2ef79573359b       ingress-nginx-controller-9cc49f96f-dwspc   ingress-nginx
	c019dcc46e8b9       gcr.io/cloud-spanner-emulator/emulator@sha256:77d0cd8103fe32875bbb04c070a7d1db292093b65d11c99c00cf39e8a13852f5                               3 minutes ago       Running             cloud-spanner-emulator                   0                   7d71f40dc2bb7       cloud-spanner-emulator-85f6b7fc65-thvpj    default
	ba5695d849b4f       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        3 minutes ago       Running             metrics-server                           0                   db535e031c153       metrics-server-85b7d694d7-tscmk            kube-system
	351cf9cd8e8f8       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              3 minutes ago       Running             registry-proxy                           0                   2219cf3ff875f       registry-proxy-4nwwr                       kube-system
	c2d0db82bc7f2       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago       Running             volume-snapshot-controller               0                   183f8955b2cb3       snapshot-controller-7d9fbc56b8-k5rg9       kube-system
	5925d6c423d79       nvcr.io/nvidia/k8s-device-plugin@sha256:206d989142113ab71eaf27958a0e0a203f40103cf5b48890f5de80fd1b3fcfde                                     3 minutes ago       Running             nvidia-device-plugin-ctr                 0                   ddd6b9140c114       nvidia-device-plugin-daemonset-84v2d       kube-system
	11cc9a267a159       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              3 minutes ago       Running             yakd                                     0                   7ceb2f5a27e72       yakd-dashboard-5ff678cb9-ccz5v             yakd-dashboard
	228036e3d3021       docker.io/library/registry@sha256:f26c394e5b7c3a707c7373c3e9388e44f0d5bdd3def19652c6bd2ac1a0fa6758                                           3 minutes ago       Running             registry                                 0                   b30d8d5e57e72       registry-66898fdd98-88sgc                  kube-system
	d38c57e36e359       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               3 minutes ago       Running             minikube-ingress-dns                     0                   0b5e358299c34       kube-ingress-dns-minikube                  kube-system
	c8b82f114f8e3       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:73b47a951627d604fcf1cf93ddc15004fe3854f881da22f690854d098255f1c1                   4 minutes ago       Exited              patch                                    0                   d91a9f6ff6792       ingress-nginx-admission-patch-bpnzz        ingress-nginx
	8ab3974a2c302       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   4 minutes ago       Running             csi-external-health-monitor-controller   0                   90055626cb73d       csi-hostpathplugin-vsbgb                   kube-system
	70497b5707570       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      4 minutes ago       Running             volume-snapshot-controller               0                   6a79f287a52e1       snapshot-controller-7d9fbc56b8-ct6ht       kube-system
	26742750260bf       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              4 minutes ago       Running             csi-resizer                              0                   daef42118870d       csi-hostpath-resizer-0                     kube-system
	2bb1e9011f7aa       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             4 minutes ago       Running             local-path-provisioner                   0                   d131b686b6646       local-path-provisioner-648f6765c9-rrkgn    local-path-storage
	7099c81ca982b       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             4 minutes ago       Running             csi-attacher                             0                   4a984a0357c34       csi-hostpath-attacher-0                    kube-system
	036cd246674ae       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:73b47a951627d604fcf1cf93ddc15004fe3854f881da22f690854d098255f1c1                   4 minutes ago       Exited              create                                   0                   076b77d645205       ingress-nginx-admission-create-4r899       ingress-nginx
	2657f869bb852       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             4 minutes ago       Running             coredns                                  0                   f0a958be4f7ed       coredns-66bc5c9577-2hhqm                   kube-system
	82907fef03cc4       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             4 minutes ago       Running             storage-provisioner                      0                   7af83c95d09e5       storage-provisioner                        kube-system
	28257b7548dee       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             4 minutes ago       Running             kube-proxy                               0                   2388d4ea56ec5       kube-proxy-5hd7r                           kube-system
	1a59139ec0fac       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             4 minutes ago       Running             kindnet-cni                              0                   e31991cd5cf89       kindnet-vx5lb                              kube-system
	23bd53ece83d0       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             5 minutes ago       Running             kube-apiserver                           0                   3fc39c1a47af7       kube-apiserver-addons-952140               kube-system
	1cbcaf90a2815       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             5 minutes ago       Running             kube-controller-manager                  0                   d76f4f0c6cff8       kube-controller-manager-addons-952140      kube-system
	22981c6dff74a       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             5 minutes ago       Running             kube-scheduler                           0                   628e6509a3941       kube-scheduler-addons-952140               kube-system
	e937e437e1e79       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             5 minutes ago       Running             etcd                                     0                   82b3b88c261f4       etcd-addons-952140                         kube-system
	
	
	==> coredns [2657f869bb8529138f74b802beedcd922a626ac30c50e54c72731eaff1b930c0] <==
	[INFO] 10.244.0.13:51292 - 40170 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002266458s
	[INFO] 10.244.0.13:51292 - 5930 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000111078s
	[INFO] 10.244.0.13:51292 - 49491 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000163544s
	[INFO] 10.244.0.13:42123 - 41584 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000158727s
	[INFO] 10.244.0.13:42123 - 41346 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000074055s
	[INFO] 10.244.0.13:41541 - 33908 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00009223s
	[INFO] 10.244.0.13:41541 - 33687 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000101379s
	[INFO] 10.244.0.13:38400 - 6582 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000085067s
	[INFO] 10.244.0.13:38400 - 6385 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000061353s
	[INFO] 10.244.0.13:46743 - 59576 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.006130656s
	[INFO] 10.244.0.13:46743 - 59371 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.006163954s
	[INFO] 10.244.0.13:36530 - 48295 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000113745s
	[INFO] 10.244.0.13:36530 - 48123 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000185732s
	[INFO] 10.244.0.20:56295 - 61874 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000176755s
	[INFO] 10.244.0.20:51460 - 16994 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000160557s
	[INFO] 10.244.0.20:41689 - 53547 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000285716s
	[INFO] 10.244.0.20:42517 - 30107 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000325357s
	[INFO] 10.244.0.20:43953 - 60196 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00016873s
	[INFO] 10.244.0.20:41318 - 12340 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000255184s
	[INFO] 10.244.0.20:51545 - 57495 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.004904901s
	[INFO] 10.244.0.20:51299 - 54833 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.004941145s
	[INFO] 10.244.0.20:58955 - 18158 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003360786s
	[INFO] 10.244.0.20:43220 - 1055 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.002768019s
	[INFO] 10.244.0.23:44158 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000147445s
	[INFO] 10.244.0.23:33873 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00015803s
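
The NXDOMAIN rows above are expected: with the usual in-cluster resolv.conf (typically ndots:5 plus the namespace, svc, cluster.local, and host search domains), a relative service name is tried against every search suffix before the absolute name answers NOERROR. A small Go sketch of the difference is shown below; it would only resolve when run inside the cluster, and the querying pod's ndots setting is an assumption.

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()

	for _, host := range []string{
		// Relative name: expanded through the resolv.conf search domains,
		// producing the extra NXDOMAIN lookups seen in the coredns log.
		"registry.kube-system",
		// Absolute (dot-terminated) name: resolved with a single query.
		"registry.kube-system.svc.cluster.local.",
	} {
		addrs, err := net.DefaultResolver.LookupHost(ctx, host)
		fmt.Println(host, addrs, err)
	}
}
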
	
	
	==> describe nodes <==
	Name:               addons-952140
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-952140
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a43873c79fc22f8b1ccd29d3dfa635d392b09335
	                    minikube.k8s.io/name=addons-952140
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_03T18_27_47_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-952140
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-952140"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 03 Oct 2025 18:27:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-952140
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 03 Oct 2025 18:32:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 03 Oct 2025 18:32:41 +0000   Fri, 03 Oct 2025 18:27:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 03 Oct 2025 18:32:41 +0000   Fri, 03 Oct 2025 18:27:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 03 Oct 2025 18:32:41 +0000   Fri, 03 Oct 2025 18:27:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 03 Oct 2025 18:32:41 +0000   Fri, 03 Oct 2025 18:28:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-952140
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 05cbf1d28c6b4036a123ffa7870f67eb
	  System UUID:                7f98a991-1761-476a-88c4-95c71c61f734
	  Boot ID:                    3762136e-8bec-4104-a5cb-0b1976f6048e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m59s
	  default                     cloud-spanner-emulator-85f6b7fc65-thvpj     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m49s
	  default                     hello-world-app-5d498dc89-qbbpk             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  gadget                      gadget-8d4lm                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m47s
	  gcp-auth                    gcp-auth-78565c9fb4-qh9mv                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m43s
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-dwspc    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         4m47s
	  kube-system                 coredns-66bc5c9577-2hhqm                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     4m52s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m46s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m46s
	  kube-system                 csi-hostpathplugin-vsbgb                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m12s
	  kube-system                 etcd-addons-952140                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         4m58s
	  kube-system                 kindnet-vx5lb                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4m53s
	  kube-system                 kube-apiserver-addons-952140                250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 kube-controller-manager-addons-952140       200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m48s
	  kube-system                 kube-proxy-5hd7r                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 kube-scheduler-addons-952140                100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 metrics-server-85b7d694d7-tscmk             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         4m48s
	  kube-system                 nvidia-device-plugin-daemonset-84v2d        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m12s
	  kube-system                 registry-66898fdd98-88sgc                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 registry-creds-764b6fb674-dqntl             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 registry-proxy-4nwwr                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m12s
	  kube-system                 snapshot-controller-7d9fbc56b8-ct6ht        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m47s
	  kube-system                 snapshot-controller-7d9fbc56b8-k5rg9        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m47s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m49s
	  local-path-storage          local-path-provisioner-648f6765c9-rrkgn     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m48s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-ccz5v              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     4m47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 4m51s                kube-proxy       
	  Normal   Starting                 5m5s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m5s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m5s (x8 over 5m5s)  kubelet          Node addons-952140 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m5s (x8 over 5m5s)  kubelet          Node addons-952140 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m5s (x8 over 5m5s)  kubelet          Node addons-952140 status is now: NodeHasSufficientPID
	  Normal   Starting                 4m58s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 4m58s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  4m58s                kubelet          Node addons-952140 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m58s                kubelet          Node addons-952140 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m58s                kubelet          Node addons-952140 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m54s                node-controller  Node addons-952140 event: Registered Node addons-952140 in Controller
	  Normal   NodeReady                4m12s                kubelet          Node addons-952140 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct 3 17:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.016734] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.507620] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.057770] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.764958] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.639190] kauditd_printk_skb: 36 callbacks suppressed
	[Oct 3 18:16] hrtimer: interrupt took 33359751 ns
	[Oct 3 18:26] kauditd_printk_skb: 8 callbacks suppressed
	[Oct 3 18:27] overlayfs: idmapped layers are currently not supported
	[  +0.053491] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [e937e437e1e79c6bcbb92c82ee9849b6f8ceb2c5980d23b084e27a6fb88ab45a] <==
	{"level":"warn","ts":"2025-10-03T18:27:42.491737Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T18:27:42.514414Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T18:27:42.525454Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T18:27:42.544837Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T18:27:42.558142Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T18:27:42.581981Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T18:27:42.620697Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T18:27:42.657803Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T18:27:42.673648Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T18:27:42.698893Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T18:27:42.720168Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T18:27:42.728243Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T18:27:42.770770Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T18:27:42.773445Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T18:27:42.791492Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T18:27:42.813150Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T18:27:42.850347Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T18:27:42.856773Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T18:27:42.935332Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T18:27:58.752957Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T18:27:58.768294Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T18:28:20.793509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T18:28:20.811206Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T18:28:20.833744Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T18:28:20.849587Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46642","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [1ca7b1012478e6884e729e3e480969ed4ef066a6aa202ed9cc90f7153a4e4320] <==
	2025/10/03 18:29:30 GCP Auth Webhook started!
	2025/10/03 18:29:44 Ready to marshal response ...
	2025/10/03 18:29:44 Ready to write response ...
	2025/10/03 18:29:45 Ready to marshal response ...
	2025/10/03 18:29:45 Ready to write response ...
	2025/10/03 18:29:45 Ready to marshal response ...
	2025/10/03 18:29:45 Ready to write response ...
	2025/10/03 18:30:06 Ready to marshal response ...
	2025/10/03 18:30:06 Ready to write response ...
	2025/10/03 18:30:09 Ready to marshal response ...
	2025/10/03 18:30:09 Ready to write response ...
	2025/10/03 18:30:09 Ready to marshal response ...
	2025/10/03 18:30:09 Ready to write response ...
	2025/10/03 18:30:18 Ready to marshal response ...
	2025/10/03 18:30:18 Ready to write response ...
	2025/10/03 18:30:22 Ready to marshal response ...
	2025/10/03 18:30:22 Ready to write response ...
	2025/10/03 18:30:32 Ready to marshal response ...
	2025/10/03 18:30:32 Ready to write response ...
	2025/10/03 18:30:49 Ready to marshal response ...
	2025/10/03 18:30:49 Ready to write response ...
	2025/10/03 18:32:42 Ready to marshal response ...
	2025/10/03 18:32:42 Ready to write response ...
	
	
	==> kernel <==
	 18:32:44 up  1:15,  0 user,  load average: 1.17, 2.45, 3.32
	Linux addons-952140 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1a59139ec0face1693267071ca3c3ba3e8eff397418ffbf25f3682c68eee244a] <==
	I1003 18:30:42.624440       1 main.go:301] handling current node
	I1003 18:30:52.623786       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1003 18:30:52.623881       1 main.go:301] handling current node
	I1003 18:31:02.624829       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1003 18:31:02.624995       1 main.go:301] handling current node
	I1003 18:31:12.622056       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1003 18:31:12.622091       1 main.go:301] handling current node
	I1003 18:31:22.622357       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1003 18:31:22.622396       1 main.go:301] handling current node
	I1003 18:31:32.626034       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1003 18:31:32.626134       1 main.go:301] handling current node
	I1003 18:31:42.622266       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1003 18:31:42.622301       1 main.go:301] handling current node
	I1003 18:31:52.629479       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1003 18:31:52.629583       1 main.go:301] handling current node
	I1003 18:32:02.626117       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1003 18:32:02.626151       1 main.go:301] handling current node
	I1003 18:32:12.629408       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1003 18:32:12.629445       1 main.go:301] handling current node
	I1003 18:32:22.625950       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1003 18:32:22.626060       1 main.go:301] handling current node
	I1003 18:32:32.626618       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1003 18:32:32.626653       1 main.go:301] handling current node
	I1003 18:32:42.622080       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1003 18:32:42.622112       1 main.go:301] handling current node
	
	
	==> kube-apiserver [23bd53ece83d04d894e5fc60fda04a6f8bdfe8d6c59ffad6c4dcacc168ec4ed8] <==
	W1003 18:28:57.557065       1 handler_proxy.go:99] no RequestInfo found in the context
	E1003 18:28:57.557152       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1003 18:28:57.557178       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1003 18:28:57.558341       1 handler_proxy.go:99] no RequestInfo found in the context
	E1003 18:28:57.558379       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1003 18:28:57.558392       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1003 18:29:19.271382       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.168.131:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.168.131:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.168.131:443: connect: connection refused" logger="UnhandledError"
	W1003 18:29:19.271473       1 handler_proxy.go:99] no RequestInfo found in the context
	E1003 18:29:19.271587       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1003 18:29:19.271925       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.168.131:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.168.131:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.168.131:443: connect: connection refused" logger="UnhandledError"
	E1003 18:29:19.277961       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.168.131:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.168.131:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.168.131:443: connect: connection refused" logger="UnhandledError"
	E1003 18:29:19.298919       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.168.131:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.168.131:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.168.131:443: connect: connection refused" logger="UnhandledError"
	I1003 18:29:19.452985       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1003 18:29:53.842366       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:42378: use of closed network connection
	E1003 18:29:53.969474       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:42396: use of closed network connection
	I1003 18:30:22.011334       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1003 18:30:22.322027       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.98.23.179"}
	I1003 18:30:44.968022       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1003 18:32:42.322762       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.108.163.21"}
	
	
	==> kube-controller-manager [1cbcaf90a28158f2a4d5495c4b92561650195912704daec05dcf1d9b56429e5c] <==
	I1003 18:27:50.803802       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1003 18:27:50.803855       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1003 18:27:50.803897       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1003 18:27:50.804389       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1003 18:27:50.804420       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1003 18:27:50.804666       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1003 18:27:50.804714       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1003 18:27:50.808488       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1003 18:27:50.808519       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1003 18:27:50.808528       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1003 18:27:50.809352       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1003 18:27:50.810133       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1003 18:27:50.810144       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1003 18:27:50.834434       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-952140" podCIDRs=["10.244.0.0/24"]
	E1003 18:27:56.735760       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1003 18:28:20.785627       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1003 18:28:20.785793       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1003 18:28:20.785834       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1003 18:28:20.812643       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1003 18:28:20.821990       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1003 18:28:20.886395       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1003 18:28:20.923064       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1003 18:28:35.758898       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1003 18:28:50.891943       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1003 18:28:50.930661       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [28257b7548dee5496025c494fc69f7d27b158c004459fe9cf7e145244cc402b4] <==
	I1003 18:27:52.792976       1 server_linux.go:53] "Using iptables proxy"
	I1003 18:27:52.874495       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1003 18:27:52.977101       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1003 18:27:52.977304       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1003 18:27:52.977409       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1003 18:27:53.017887       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1003 18:27:53.017939       1 server_linux.go:132] "Using iptables Proxier"
	I1003 18:27:53.026366       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1003 18:27:53.026724       1 server.go:527] "Version info" version="v1.34.1"
	I1003 18:27:53.026738       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1003 18:27:53.028165       1 config.go:200] "Starting service config controller"
	I1003 18:27:53.028175       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1003 18:27:53.028192       1 config.go:106] "Starting endpoint slice config controller"
	I1003 18:27:53.028197       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1003 18:27:53.028207       1 config.go:403] "Starting serviceCIDR config controller"
	I1003 18:27:53.028210       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1003 18:27:53.034455       1 config.go:309] "Starting node config controller"
	I1003 18:27:53.034472       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1003 18:27:53.034480       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1003 18:27:53.128846       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1003 18:27:53.128880       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1003 18:27:53.128918       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [22981c6dff74a1d10571b76dae9b7bbbb33ca3843ab35927e1e5997100c5be1c] <==
	I1003 18:27:44.049017       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1003 18:27:44.054565       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1003 18:27:44.055901       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1003 18:27:44.067222       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1003 18:27:44.067471       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1003 18:27:44.067663       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1003 18:27:44.067941       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1003 18:27:44.068054       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1003 18:27:44.068298       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1003 18:27:44.069724       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1003 18:27:44.069857       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1003 18:27:44.069967       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1003 18:27:44.070062       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1003 18:27:44.070172       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1003 18:27:44.070280       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1003 18:27:44.070382       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1003 18:27:44.070921       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1003 18:27:44.071089       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1003 18:27:44.071205       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1003 18:27:44.071269       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1003 18:27:44.923454       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1003 18:27:44.967180       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1003 18:27:45.043147       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1003 18:27:45.088399       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I1003 18:27:47.448458       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 03 18:30:57 addons-952140 kubelet[1298]: I1003 18:30:57.702140    1298 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/6a94ea8c-c951-4508-9644-092ebe111a17-gcp-creds\") pod \"6a94ea8c-c951-4508-9644-092ebe111a17\" (UID: \"6a94ea8c-c951-4508-9644-092ebe111a17\") "
	Oct 03 18:30:57 addons-952140 kubelet[1298]: I1003 18:30:57.702212    1298 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a94ea8c-c951-4508-9644-092ebe111a17-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "6a94ea8c-c951-4508-9644-092ebe111a17" (UID: "6a94ea8c-c951-4508-9644-092ebe111a17"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Oct 03 18:30:57 addons-952140 kubelet[1298]: I1003 18:30:57.702793    1298 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"task-pv-storage\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^150889ee-a087-11f0-8c5e-bec868befd11\") pod \"6a94ea8c-c951-4508-9644-092ebe111a17\" (UID: \"6a94ea8c-c951-4508-9644-092ebe111a17\") "
	Oct 03 18:30:57 addons-952140 kubelet[1298]: I1003 18:30:57.702844    1298 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gvcq5\" (UniqueName: \"kubernetes.io/projected/6a94ea8c-c951-4508-9644-092ebe111a17-kube-api-access-gvcq5\") pod \"6a94ea8c-c951-4508-9644-092ebe111a17\" (UID: \"6a94ea8c-c951-4508-9644-092ebe111a17\") "
	Oct 03 18:30:57 addons-952140 kubelet[1298]: I1003 18:30:57.703005    1298 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/6a94ea8c-c951-4508-9644-092ebe111a17-gcp-creds\") on node \"addons-952140\" DevicePath \"\""
	Oct 03 18:30:57 addons-952140 kubelet[1298]: I1003 18:30:57.705199    1298 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a94ea8c-c951-4508-9644-092ebe111a17-kube-api-access-gvcq5" (OuterVolumeSpecName: "kube-api-access-gvcq5") pod "6a94ea8c-c951-4508-9644-092ebe111a17" (UID: "6a94ea8c-c951-4508-9644-092ebe111a17"). InnerVolumeSpecName "kube-api-access-gvcq5". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 03 18:30:57 addons-952140 kubelet[1298]: I1003 18:30:57.707484    1298 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^150889ee-a087-11f0-8c5e-bec868befd11" (OuterVolumeSpecName: "task-pv-storage") pod "6a94ea8c-c951-4508-9644-092ebe111a17" (UID: "6a94ea8c-c951-4508-9644-092ebe111a17"). InnerVolumeSpecName "pvc-09f8eb9b-5bea-4920-880b-179bc0e0c2f3". PluginName "kubernetes.io/csi", VolumeGIDValue ""
	Oct 03 18:30:57 addons-952140 kubelet[1298]: I1003 18:30:57.724914    1298 scope.go:117] "RemoveContainer" containerID="6fcd74afa3c645a8427adcb2dfd15532d89c235a7c005d8df5b5684f48b7ba1e"
	Oct 03 18:30:57 addons-952140 kubelet[1298]: I1003 18:30:57.733832    1298 scope.go:117] "RemoveContainer" containerID="6fcd74afa3c645a8427adcb2dfd15532d89c235a7c005d8df5b5684f48b7ba1e"
	Oct 03 18:30:57 addons-952140 kubelet[1298]: E1003 18:30:57.737021    1298 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fcd74afa3c645a8427adcb2dfd15532d89c235a7c005d8df5b5684f48b7ba1e\": container with ID starting with 6fcd74afa3c645a8427adcb2dfd15532d89c235a7c005d8df5b5684f48b7ba1e not found: ID does not exist" containerID="6fcd74afa3c645a8427adcb2dfd15532d89c235a7c005d8df5b5684f48b7ba1e"
	Oct 03 18:30:57 addons-952140 kubelet[1298]: I1003 18:30:57.738599    1298 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fcd74afa3c645a8427adcb2dfd15532d89c235a7c005d8df5b5684f48b7ba1e"} err="failed to get container status \"6fcd74afa3c645a8427adcb2dfd15532d89c235a7c005d8df5b5684f48b7ba1e\": rpc error: code = NotFound desc = could not find container \"6fcd74afa3c645a8427adcb2dfd15532d89c235a7c005d8df5b5684f48b7ba1e\": container with ID starting with 6fcd74afa3c645a8427adcb2dfd15532d89c235a7c005d8df5b5684f48b7ba1e not found: ID does not exist"
	Oct 03 18:30:57 addons-952140 kubelet[1298]: I1003 18:30:57.803346    1298 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-09f8eb9b-5bea-4920-880b-179bc0e0c2f3\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^150889ee-a087-11f0-8c5e-bec868befd11\") on node \"addons-952140\" "
	Oct 03 18:30:57 addons-952140 kubelet[1298]: I1003 18:30:57.803522    1298 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gvcq5\" (UniqueName: \"kubernetes.io/projected/6a94ea8c-c951-4508-9644-092ebe111a17-kube-api-access-gvcq5\") on node \"addons-952140\" DevicePath \"\""
	Oct 03 18:30:57 addons-952140 kubelet[1298]: I1003 18:30:57.813745    1298 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-09f8eb9b-5bea-4920-880b-179bc0e0c2f3" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^150889ee-a087-11f0-8c5e-bec868befd11") on node "addons-952140"
	Oct 03 18:30:57 addons-952140 kubelet[1298]: I1003 18:30:57.904924    1298 reconciler_common.go:299] "Volume detached for volume \"pvc-09f8eb9b-5bea-4920-880b-179bc0e0c2f3\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^150889ee-a087-11f0-8c5e-bec868befd11\") on node \"addons-952140\" DevicePath \"\""
	Oct 03 18:30:58 addons-952140 kubelet[1298]: I1003 18:30:58.540437    1298 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a94ea8c-c951-4508-9644-092ebe111a17" path="/var/lib/kubelet/pods/6a94ea8c-c951-4508-9644-092ebe111a17/volumes"
	Oct 03 18:31:30 addons-952140 kubelet[1298]: I1003 18:31:30.536800    1298 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66898fdd98-88sgc" secret="" err="secret \"gcp-auth\" not found"
	Oct 03 18:32:08 addons-952140 kubelet[1298]: I1003 18:32:08.536791    1298 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-4nwwr" secret="" err="secret \"gcp-auth\" not found"
	Oct 03 18:32:12 addons-952140 kubelet[1298]: I1003 18:32:12.535950    1298 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-84v2d" secret="" err="secret \"gcp-auth\" not found"
	Oct 03 18:32:42 addons-952140 kubelet[1298]: I1003 18:32:42.205110    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/bfb759ae-32e9-44d1-b457-ebdbb1f3486f-gcp-creds\") pod \"hello-world-app-5d498dc89-qbbpk\" (UID: \"bfb759ae-32e9-44d1-b457-ebdbb1f3486f\") " pod="default/hello-world-app-5d498dc89-qbbpk"
	Oct 03 18:32:42 addons-952140 kubelet[1298]: I1003 18:32:42.205830    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqn47\" (UniqueName: \"kubernetes.io/projected/bfb759ae-32e9-44d1-b457-ebdbb1f3486f-kube-api-access-nqn47\") pod \"hello-world-app-5d498dc89-qbbpk\" (UID: \"bfb759ae-32e9-44d1-b457-ebdbb1f3486f\") " pod="default/hello-world-app-5d498dc89-qbbpk"
	Oct 03 18:32:42 addons-952140 kubelet[1298]: W1003 18:32:42.434652    1298 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/85b69962c0dc4c2d215c8870f97829566d7c577f428241564d0dd056e84304a6/crio-8c99402fd7eaa23a781d214e4afc492e99c599f0a637c5350e87de0a7df63913 WatchSource:0}: Error finding container 8c99402fd7eaa23a781d214e4afc492e99c599f0a637c5350e87de0a7df63913: Status 404 returned error can't find the container with id 8c99402fd7eaa23a781d214e4afc492e99c599f0a637c5350e87de0a7df63913
	Oct 03 18:32:43 addons-952140 kubelet[1298]: I1003 18:32:43.242421    1298 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-dqntl" secret="" err="secret \"gcp-auth\" not found"
	Oct 03 18:32:43 addons-952140 kubelet[1298]: W1003 18:32:43.280541    1298 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/85b69962c0dc4c2d215c8870f97829566d7c577f428241564d0dd056e84304a6/crio-a7326171ecec16974d8e230205be522ee6a61a9b2261249607a561566c9582de WatchSource:0}: Error finding container a7326171ecec16974d8e230205be522ee6a61a9b2261249607a561566c9582de: Status 404 returned error can't find the container with id a7326171ecec16974d8e230205be522ee6a61a9b2261249607a561566c9582de
	Oct 03 18:32:44 addons-952140 kubelet[1298]: I1003 18:32:44.143924    1298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-5d498dc89-qbbpk" podStartSLOduration=1.502323098 podStartE2EDuration="2.143906475s" podCreationTimestamp="2025-10-03 18:32:42 +0000 UTC" firstStartedPulling="2025-10-03 18:32:42.436779866 +0000 UTC m=+296.037211125" lastFinishedPulling="2025-10-03 18:32:43.078363252 +0000 UTC m=+296.678794502" observedRunningTime="2025-10-03 18:32:44.142833736 +0000 UTC m=+297.743264987" watchObservedRunningTime="2025-10-03 18:32:44.143906475 +0000 UTC m=+297.744337726"
	
	
	==> storage-provisioner [82907fef03cc43b849878194de7aef8c729ee89dcf5fddba29650a239ab81e90] <==
	W1003 18:32:19.350281       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:32:21.353916       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:32:21.358659       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:32:23.361521       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:32:23.366575       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:32:25.370477       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:32:25.377423       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:32:27.380813       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:32:27.385592       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:32:29.388699       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:32:29.393607       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:32:31.396078       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:32:31.402797       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:32:33.405689       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:32:33.410199       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:32:35.414484       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:32:35.419203       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:32:37.422615       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:32:37.429618       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:32:39.432410       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:32:39.437009       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:32:41.440820       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:32:41.449006       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:32:43.452797       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:32:43.458208       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-952140 -n addons-952140
helpers_test.go:269: (dbg) Run:  kubectl --context addons-952140 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-4r899 ingress-nginx-admission-patch-bpnzz registry-creds-764b6fb674-dqntl
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-952140 describe pod ingress-nginx-admission-create-4r899 ingress-nginx-admission-patch-bpnzz registry-creds-764b6fb674-dqntl
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-952140 describe pod ingress-nginx-admission-create-4r899 ingress-nginx-admission-patch-bpnzz registry-creds-764b6fb674-dqntl: exit status 1 (86.096365ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-4r899" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-bpnzz" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-dqntl" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-952140 describe pod ingress-nginx-admission-create-4r899 ingress-nginx-admission-patch-bpnzz registry-creds-764b6fb674-dqntl: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-952140 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-952140 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (283.546075ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1003 18:32:46.034230  296620 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:32:46.035040  296620 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:32:46.035063  296620 out.go:374] Setting ErrFile to fd 2...
	I1003 18:32:46.035069  296620 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:32:46.035355  296620 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-284583/.minikube/bin
	I1003 18:32:46.035696  296620 mustload.go:65] Loading cluster: addons-952140
	I1003 18:32:46.036099  296620 config.go:182] Loaded profile config "addons-952140": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:32:46.036119  296620 addons.go:606] checking whether the cluster is paused
	I1003 18:32:46.036228  296620 config.go:182] Loaded profile config "addons-952140": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:32:46.036244  296620 host.go:66] Checking if "addons-952140" exists ...
	I1003 18:32:46.036696  296620 cli_runner.go:164] Run: docker container inspect addons-952140 --format={{.State.Status}}
	I1003 18:32:46.060349  296620 ssh_runner.go:195] Run: systemctl --version
	I1003 18:32:46.060426  296620 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952140
	I1003 18:32:46.079089  296620 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/addons-952140/id_rsa Username:docker}
	I1003 18:32:46.176663  296620 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1003 18:32:46.176764  296620 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1003 18:32:46.233621  296620 cri.go:89] found id: "6bcae3b33a1f5b203ab2ffa7439a0e4bbe10e5e51952f778111d1dbd040bcd39"
	I1003 18:32:46.233641  296620 cri.go:89] found id: "1e2fad159f19d08e6df56ea27e62993a1f0a13c3f1b56133e1e4ebdbaf802e0b"
	I1003 18:32:46.233645  296620 cri.go:89] found id: "a2a54b8525b1b03c3294a286e260174ebeb999c736e2e29750632824346e2b8a"
	I1003 18:32:46.233650  296620 cri.go:89] found id: "764f61b1d1b52574dff121ce3057ed9a2791b059752cb80d76e6a5ae323e3765"
	I1003 18:32:46.233653  296620 cri.go:89] found id: "5520f176a27b0060104f01c653743a97419cd7df90b959dc02f4359563db372f"
	I1003 18:32:46.233657  296620 cri.go:89] found id: "a55dd027b4c2417a4e857716af2ec80adf3ee359efc1fcdea96ae017da8094db"
	I1003 18:32:46.233660  296620 cri.go:89] found id: "d11765424ad977c42ad7828e106df59281b6041a6b85d34d604738d051cc2257"
	I1003 18:32:46.233663  296620 cri.go:89] found id: "ba5695d849b4ff437b5c5a4c73351652ea5b855eb0061d3826ad4a2a76513650"
	I1003 18:32:46.233666  296620 cri.go:89] found id: "351cf9cd8e8f80a1ce058ad47867cc1e9e314f2100ba10ef01326c91fbea576c"
	I1003 18:32:46.233676  296620 cri.go:89] found id: "c2d0db82bc7f2bcfc4af04f3633a094c0e554392449fbf12a24ed377b92f941b"
	I1003 18:32:46.233679  296620 cri.go:89] found id: "5925d6c423d79839f9eb8870977fb293e3c6b1ece77aa59bf7c2a4b120ca3ad3"
	I1003 18:32:46.233682  296620 cri.go:89] found id: "228036e3d30218b16026d557d3264fc361f0c7c42c143fc93a96fd7945d8bdf3"
	I1003 18:32:46.233685  296620 cri.go:89] found id: "d38c57e36e3594ef4f8f3d28db24890c659027ed75977701aa969ce142c27e0e"
	I1003 18:32:46.233688  296620 cri.go:89] found id: "8ab3974a2c302b83e53bc5a243fae87bdec8ed1ca2da979ebcc29dabb8f30fc4"
	I1003 18:32:46.233691  296620 cri.go:89] found id: "70497b5707570324a85bde79dadf41e8e6ded9bd45545ee1a7756ba32eed86d6"
	I1003 18:32:46.233696  296620 cri.go:89] found id: "26742750260bfb48e7909f410307ee53b3dafe6b84bb3a467c505e24d28d4fe1"
	I1003 18:32:46.233699  296620 cri.go:89] found id: "7099c81ca982b78bfa4dd5784e69f027f40fb02b99bce69ec1f792090be6a50b"
	I1003 18:32:46.233704  296620 cri.go:89] found id: "2657f869bb8529138f74b802beedcd922a626ac30c50e54c72731eaff1b930c0"
	I1003 18:32:46.233707  296620 cri.go:89] found id: "82907fef03cc43b849878194de7aef8c729ee89dcf5fddba29650a239ab81e90"
	I1003 18:32:46.233710  296620 cri.go:89] found id: "28257b7548dee5496025c494fc69f7d27b158c004459fe9cf7e145244cc402b4"
	I1003 18:32:46.233714  296620 cri.go:89] found id: "1a59139ec0face1693267071ca3c3ba3e8eff397418ffbf25f3682c68eee244a"
	I1003 18:32:46.233718  296620 cri.go:89] found id: "23bd53ece83d04d894e5fc60fda04a6f8bdfe8d6c59ffad6c4dcacc168ec4ed8"
	I1003 18:32:46.233721  296620 cri.go:89] found id: "1cbcaf90a28158f2a4d5495c4b92561650195912704daec05dcf1d9b56429e5c"
	I1003 18:32:46.233726  296620 cri.go:89] found id: "22981c6dff74a1d10571b76dae9b7bbbb33ca3843ab35927e1e5997100c5be1c"
	I1003 18:32:46.233732  296620 cri.go:89] found id: "e937e437e1e79c6bcbb92c82ee9849b6f8ceb2c5980d23b084e27a6fb88ab45a"
	I1003 18:32:46.233735  296620 cri.go:89] found id: ""
	I1003 18:32:46.233786  296620 ssh_runner.go:195] Run: sudo runc list -f json
	I1003 18:32:46.249504  296620 out.go:203] 
	W1003 18:32:46.252504  296620 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-03T18:32:46Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-03T18:32:46Z" level=error msg="open /run/runc: no such file or directory"
	
	W1003 18:32:46.252533  296620 out.go:285] * 
	* 
	W1003 18:32:46.259184  296620 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 18:32:46.262362  296620 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-arm64 -p addons-952140 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-952140 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-952140 addons disable ingress --alsologtostderr -v=1: exit status 11 (264.030808ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1003 18:32:46.318427  296672 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:32:46.319309  296672 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:32:46.319328  296672 out.go:374] Setting ErrFile to fd 2...
	I1003 18:32:46.319354  296672 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:32:46.319707  296672 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-284583/.minikube/bin
	I1003 18:32:46.320065  296672 mustload.go:65] Loading cluster: addons-952140
	I1003 18:32:46.320462  296672 config.go:182] Loaded profile config "addons-952140": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:32:46.320476  296672 addons.go:606] checking whether the cluster is paused
	I1003 18:32:46.320623  296672 config.go:182] Loaded profile config "addons-952140": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:32:46.320636  296672 host.go:66] Checking if "addons-952140" exists ...
	I1003 18:32:46.321259  296672 cli_runner.go:164] Run: docker container inspect addons-952140 --format={{.State.Status}}
	I1003 18:32:46.339579  296672 ssh_runner.go:195] Run: systemctl --version
	I1003 18:32:46.339634  296672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952140
	I1003 18:32:46.358886  296672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/addons-952140/id_rsa Username:docker}
	I1003 18:32:46.462216  296672 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1003 18:32:46.462314  296672 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1003 18:32:46.497846  296672 cri.go:89] found id: "6bcae3b33a1f5b203ab2ffa7439a0e4bbe10e5e51952f778111d1dbd040bcd39"
	I1003 18:32:46.497871  296672 cri.go:89] found id: "1e2fad159f19d08e6df56ea27e62993a1f0a13c3f1b56133e1e4ebdbaf802e0b"
	I1003 18:32:46.497876  296672 cri.go:89] found id: "a2a54b8525b1b03c3294a286e260174ebeb999c736e2e29750632824346e2b8a"
	I1003 18:32:46.497880  296672 cri.go:89] found id: "764f61b1d1b52574dff121ce3057ed9a2791b059752cb80d76e6a5ae323e3765"
	I1003 18:32:46.497884  296672 cri.go:89] found id: "5520f176a27b0060104f01c653743a97419cd7df90b959dc02f4359563db372f"
	I1003 18:32:46.497888  296672 cri.go:89] found id: "a55dd027b4c2417a4e857716af2ec80adf3ee359efc1fcdea96ae017da8094db"
	I1003 18:32:46.497892  296672 cri.go:89] found id: "d11765424ad977c42ad7828e106df59281b6041a6b85d34d604738d051cc2257"
	I1003 18:32:46.497895  296672 cri.go:89] found id: "ba5695d849b4ff437b5c5a4c73351652ea5b855eb0061d3826ad4a2a76513650"
	I1003 18:32:46.497907  296672 cri.go:89] found id: "351cf9cd8e8f80a1ce058ad47867cc1e9e314f2100ba10ef01326c91fbea576c"
	I1003 18:32:46.497914  296672 cri.go:89] found id: "c2d0db82bc7f2bcfc4af04f3633a094c0e554392449fbf12a24ed377b92f941b"
	I1003 18:32:46.497918  296672 cri.go:89] found id: "5925d6c423d79839f9eb8870977fb293e3c6b1ece77aa59bf7c2a4b120ca3ad3"
	I1003 18:32:46.497921  296672 cri.go:89] found id: "228036e3d30218b16026d557d3264fc361f0c7c42c143fc93a96fd7945d8bdf3"
	I1003 18:32:46.497924  296672 cri.go:89] found id: "d38c57e36e3594ef4f8f3d28db24890c659027ed75977701aa969ce142c27e0e"
	I1003 18:32:46.497928  296672 cri.go:89] found id: "8ab3974a2c302b83e53bc5a243fae87bdec8ed1ca2da979ebcc29dabb8f30fc4"
	I1003 18:32:46.497933  296672 cri.go:89] found id: "70497b5707570324a85bde79dadf41e8e6ded9bd45545ee1a7756ba32eed86d6"
	I1003 18:32:46.497946  296672 cri.go:89] found id: "26742750260bfb48e7909f410307ee53b3dafe6b84bb3a467c505e24d28d4fe1"
	I1003 18:32:46.497953  296672 cri.go:89] found id: "7099c81ca982b78bfa4dd5784e69f027f40fb02b99bce69ec1f792090be6a50b"
	I1003 18:32:46.497959  296672 cri.go:89] found id: "2657f869bb8529138f74b802beedcd922a626ac30c50e54c72731eaff1b930c0"
	I1003 18:32:46.497962  296672 cri.go:89] found id: "82907fef03cc43b849878194de7aef8c729ee89dcf5fddba29650a239ab81e90"
	I1003 18:32:46.497965  296672 cri.go:89] found id: "28257b7548dee5496025c494fc69f7d27b158c004459fe9cf7e145244cc402b4"
	I1003 18:32:46.497970  296672 cri.go:89] found id: "1a59139ec0face1693267071ca3c3ba3e8eff397418ffbf25f3682c68eee244a"
	I1003 18:32:46.497973  296672 cri.go:89] found id: "23bd53ece83d04d894e5fc60fda04a6f8bdfe8d6c59ffad6c4dcacc168ec4ed8"
	I1003 18:32:46.497983  296672 cri.go:89] found id: "1cbcaf90a28158f2a4d5495c4b92561650195912704daec05dcf1d9b56429e5c"
	I1003 18:32:46.497991  296672 cri.go:89] found id: "22981c6dff74a1d10571b76dae9b7bbbb33ca3843ab35927e1e5997100c5be1c"
	I1003 18:32:46.497995  296672 cri.go:89] found id: "e937e437e1e79c6bcbb92c82ee9849b6f8ceb2c5980d23b084e27a6fb88ab45a"
	I1003 18:32:46.497998  296672 cri.go:89] found id: ""
	I1003 18:32:46.498062  296672 ssh_runner.go:195] Run: sudo runc list -f json
	I1003 18:32:46.513911  296672 out.go:203] 
	W1003 18:32:46.517040  296672 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-03T18:32:46Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-03T18:32:46Z" level=error msg="open /run/runc: no such file or directory"
	
	W1003 18:32:46.517116  296672 out.go:285] * 
	* 
	W1003 18:32:46.523421  296672 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 18:32:46.526354  296672 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-arm64 -p addons-952140 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (144.90s)
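Note: every addon disable attempt above fails the same way. Before disabling, minikube checks whether the cluster is paused by listing kube-system containers with crictl and then running `sudo runc list -f json` on the node; on this crio image that runc call fails because /run/runc does not exist, which is what triggers MK_ADDON_DISABLE_PAUSED. A minimal sketch for reproducing the check by hand, using only commands already shown in the log (shell access to the node, e.g. via `minikube ssh -p addons-952140`, is an assumption), would be:

	# on the host: confirm the node container is running, as the disable path does first
	docker container inspect addons-952140 --format={{.State.Status}}
	# on the node: the CRI listing step succeeds
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# on the node: the paused check then runs runc, which returns
	# "open /run/runc: no such file or directory" and aborts the disable
	sudo runc list -f json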

TestAddons/parallel/InspektorGadget (6.26s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget


=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-8d4lm" [82c61ff8-40a3-4cd7-b69f-816320e1e6f9] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003056572s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-952140 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-952140 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (257.863955ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1003 18:31:04.798079  295481 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:31:04.798782  295481 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:31:04.798794  295481 out.go:374] Setting ErrFile to fd 2...
	I1003 18:31:04.798799  295481 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:31:04.799061  295481 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-284583/.minikube/bin
	I1003 18:31:04.799336  295481 mustload.go:65] Loading cluster: addons-952140
	I1003 18:31:04.799705  295481 config.go:182] Loaded profile config "addons-952140": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:31:04.799723  295481 addons.go:606] checking whether the cluster is paused
	I1003 18:31:04.799828  295481 config.go:182] Loaded profile config "addons-952140": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:31:04.799844  295481 host.go:66] Checking if "addons-952140" exists ...
	I1003 18:31:04.800281  295481 cli_runner.go:164] Run: docker container inspect addons-952140 --format={{.State.Status}}
	I1003 18:31:04.817491  295481 ssh_runner.go:195] Run: systemctl --version
	I1003 18:31:04.817552  295481 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952140
	I1003 18:31:04.835942  295481 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/addons-952140/id_rsa Username:docker}
	I1003 18:31:04.931271  295481 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1003 18:31:04.931367  295481 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1003 18:31:04.964526  295481 cri.go:89] found id: "a2a54b8525b1b03c3294a286e260174ebeb999c736e2e29750632824346e2b8a"
	I1003 18:31:04.964546  295481 cri.go:89] found id: "764f61b1d1b52574dff121ce3057ed9a2791b059752cb80d76e6a5ae323e3765"
	I1003 18:31:04.964551  295481 cri.go:89] found id: "5520f176a27b0060104f01c653743a97419cd7df90b959dc02f4359563db372f"
	I1003 18:31:04.964555  295481 cri.go:89] found id: "a55dd027b4c2417a4e857716af2ec80adf3ee359efc1fcdea96ae017da8094db"
	I1003 18:31:04.964559  295481 cri.go:89] found id: "d11765424ad977c42ad7828e106df59281b6041a6b85d34d604738d051cc2257"
	I1003 18:31:04.964562  295481 cri.go:89] found id: "ba5695d849b4ff437b5c5a4c73351652ea5b855eb0061d3826ad4a2a76513650"
	I1003 18:31:04.964565  295481 cri.go:89] found id: "351cf9cd8e8f80a1ce058ad47867cc1e9e314f2100ba10ef01326c91fbea576c"
	I1003 18:31:04.964568  295481 cri.go:89] found id: "c2d0db82bc7f2bcfc4af04f3633a094c0e554392449fbf12a24ed377b92f941b"
	I1003 18:31:04.964571  295481 cri.go:89] found id: "5925d6c423d79839f9eb8870977fb293e3c6b1ece77aa59bf7c2a4b120ca3ad3"
	I1003 18:31:04.964577  295481 cri.go:89] found id: "228036e3d30218b16026d557d3264fc361f0c7c42c143fc93a96fd7945d8bdf3"
	I1003 18:31:04.964581  295481 cri.go:89] found id: "d38c57e36e3594ef4f8f3d28db24890c659027ed75977701aa969ce142c27e0e"
	I1003 18:31:04.964584  295481 cri.go:89] found id: "8ab3974a2c302b83e53bc5a243fae87bdec8ed1ca2da979ebcc29dabb8f30fc4"
	I1003 18:31:04.964587  295481 cri.go:89] found id: "70497b5707570324a85bde79dadf41e8e6ded9bd45545ee1a7756ba32eed86d6"
	I1003 18:31:04.964590  295481 cri.go:89] found id: "26742750260bfb48e7909f410307ee53b3dafe6b84bb3a467c505e24d28d4fe1"
	I1003 18:31:04.964593  295481 cri.go:89] found id: "7099c81ca982b78bfa4dd5784e69f027f40fb02b99bce69ec1f792090be6a50b"
	I1003 18:31:04.964603  295481 cri.go:89] found id: "2657f869bb8529138f74b802beedcd922a626ac30c50e54c72731eaff1b930c0"
	I1003 18:31:04.964606  295481 cri.go:89] found id: "82907fef03cc43b849878194de7aef8c729ee89dcf5fddba29650a239ab81e90"
	I1003 18:31:04.964611  295481 cri.go:89] found id: "28257b7548dee5496025c494fc69f7d27b158c004459fe9cf7e145244cc402b4"
	I1003 18:31:04.964615  295481 cri.go:89] found id: "1a59139ec0face1693267071ca3c3ba3e8eff397418ffbf25f3682c68eee244a"
	I1003 18:31:04.964618  295481 cri.go:89] found id: "23bd53ece83d04d894e5fc60fda04a6f8bdfe8d6c59ffad6c4dcacc168ec4ed8"
	I1003 18:31:04.964622  295481 cri.go:89] found id: "1cbcaf90a28158f2a4d5495c4b92561650195912704daec05dcf1d9b56429e5c"
	I1003 18:31:04.964626  295481 cri.go:89] found id: "22981c6dff74a1d10571b76dae9b7bbbb33ca3843ab35927e1e5997100c5be1c"
	I1003 18:31:04.964628  295481 cri.go:89] found id: "e937e437e1e79c6bcbb92c82ee9849b6f8ceb2c5980d23b084e27a6fb88ab45a"
	I1003 18:31:04.964631  295481 cri.go:89] found id: ""
	I1003 18:31:04.964680  295481 ssh_runner.go:195] Run: sudo runc list -f json
	I1003 18:31:04.980343  295481 out.go:203] 
	W1003 18:31:04.983218  295481 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-03T18:31:04Z" level=error msg="open /run/runc: no such file or directory"
	
	W1003 18:31:04.983243  295481 out.go:285] * 
	W1003 18:31:04.989658  295481 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 18:31:04.995645  295481 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-arm64 -p addons-952140 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (6.26s)
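
The addon disable/enable failures in this report all fail the same way: minikube's paused-cluster check lists the kube-system containers with crictl and then shells out to "sudo runc list -f json", which exits 1 because /run/runc does not exist on this CRI-O node. A minimal sketch for reproducing that check by hand against the same profile (the /run/crun path below is an assumption about CRI-O using crun as its OCI runtime, not something these logs confirm):

    # The same two commands the pause check runs (see the ssh_runner lines above).
    minikube -p addons-952140 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    minikube -p addons-952140 ssh -- sudo runc list -f json    # reproduces: open /run/runc: no such file or directory

    # /run/runc is only created once runc itself has managed a container; if CRI-O
    # is configured with another runtime, its state directory lives elsewhere.
    minikube -p addons-952140 ssh -- ls -d /run/runc /run/crun
    minikube -p addons-952140 ssh -- sudo crictl info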

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (5.36s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 5.004265ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-tscmk" [51883ecf-f53c-4001-af25-5785ed3fa7db] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.00415513s
addons_test.go:463: (dbg) Run:  kubectl --context addons-952140 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-952140 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-952140 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (265.43737ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 18:30:21.418208  294422 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:30:21.419535  294422 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:30:21.419553  294422 out.go:374] Setting ErrFile to fd 2...
	I1003 18:30:21.419560  294422 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:30:21.419852  294422 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-284583/.minikube/bin
	I1003 18:30:21.420167  294422 mustload.go:65] Loading cluster: addons-952140
	I1003 18:30:21.420574  294422 config.go:182] Loaded profile config "addons-952140": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:30:21.420594  294422 addons.go:606] checking whether the cluster is paused
	I1003 18:30:21.420699  294422 config.go:182] Loaded profile config "addons-952140": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:30:21.420715  294422 host.go:66] Checking if "addons-952140" exists ...
	I1003 18:30:21.421212  294422 cli_runner.go:164] Run: docker container inspect addons-952140 --format={{.State.Status}}
	I1003 18:30:21.439684  294422 ssh_runner.go:195] Run: systemctl --version
	I1003 18:30:21.439735  294422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952140
	I1003 18:30:21.469250  294422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/addons-952140/id_rsa Username:docker}
	I1003 18:30:21.563197  294422 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1003 18:30:21.563294  294422 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1003 18:30:21.596718  294422 cri.go:89] found id: "a2a54b8525b1b03c3294a286e260174ebeb999c736e2e29750632824346e2b8a"
	I1003 18:30:21.596755  294422 cri.go:89] found id: "764f61b1d1b52574dff121ce3057ed9a2791b059752cb80d76e6a5ae323e3765"
	I1003 18:30:21.596760  294422 cri.go:89] found id: "5520f176a27b0060104f01c653743a97419cd7df90b959dc02f4359563db372f"
	I1003 18:30:21.596764  294422 cri.go:89] found id: "a55dd027b4c2417a4e857716af2ec80adf3ee359efc1fcdea96ae017da8094db"
	I1003 18:30:21.596768  294422 cri.go:89] found id: "d11765424ad977c42ad7828e106df59281b6041a6b85d34d604738d051cc2257"
	I1003 18:30:21.596772  294422 cri.go:89] found id: "ba5695d849b4ff437b5c5a4c73351652ea5b855eb0061d3826ad4a2a76513650"
	I1003 18:30:21.596775  294422 cri.go:89] found id: "351cf9cd8e8f80a1ce058ad47867cc1e9e314f2100ba10ef01326c91fbea576c"
	I1003 18:30:21.596778  294422 cri.go:89] found id: "c2d0db82bc7f2bcfc4af04f3633a094c0e554392449fbf12a24ed377b92f941b"
	I1003 18:30:21.596781  294422 cri.go:89] found id: "5925d6c423d79839f9eb8870977fb293e3c6b1ece77aa59bf7c2a4b120ca3ad3"
	I1003 18:30:21.596788  294422 cri.go:89] found id: "228036e3d30218b16026d557d3264fc361f0c7c42c143fc93a96fd7945d8bdf3"
	I1003 18:30:21.596792  294422 cri.go:89] found id: "d38c57e36e3594ef4f8f3d28db24890c659027ed75977701aa969ce142c27e0e"
	I1003 18:30:21.596795  294422 cri.go:89] found id: "8ab3974a2c302b83e53bc5a243fae87bdec8ed1ca2da979ebcc29dabb8f30fc4"
	I1003 18:30:21.596797  294422 cri.go:89] found id: "70497b5707570324a85bde79dadf41e8e6ded9bd45545ee1a7756ba32eed86d6"
	I1003 18:30:21.596801  294422 cri.go:89] found id: "26742750260bfb48e7909f410307ee53b3dafe6b84bb3a467c505e24d28d4fe1"
	I1003 18:30:21.596803  294422 cri.go:89] found id: "7099c81ca982b78bfa4dd5784e69f027f40fb02b99bce69ec1f792090be6a50b"
	I1003 18:30:21.596811  294422 cri.go:89] found id: "2657f869bb8529138f74b802beedcd922a626ac30c50e54c72731eaff1b930c0"
	I1003 18:30:21.596816  294422 cri.go:89] found id: "82907fef03cc43b849878194de7aef8c729ee89dcf5fddba29650a239ab81e90"
	I1003 18:30:21.596820  294422 cri.go:89] found id: "28257b7548dee5496025c494fc69f7d27b158c004459fe9cf7e145244cc402b4"
	I1003 18:30:21.596823  294422 cri.go:89] found id: "1a59139ec0face1693267071ca3c3ba3e8eff397418ffbf25f3682c68eee244a"
	I1003 18:30:21.596826  294422 cri.go:89] found id: "23bd53ece83d04d894e5fc60fda04a6f8bdfe8d6c59ffad6c4dcacc168ec4ed8"
	I1003 18:30:21.596830  294422 cri.go:89] found id: "1cbcaf90a28158f2a4d5495c4b92561650195912704daec05dcf1d9b56429e5c"
	I1003 18:30:21.596834  294422 cri.go:89] found id: "22981c6dff74a1d10571b76dae9b7bbbb33ca3843ab35927e1e5997100c5be1c"
	I1003 18:30:21.596837  294422 cri.go:89] found id: "e937e437e1e79c6bcbb92c82ee9849b6f8ceb2c5980d23b084e27a6fb88ab45a"
	I1003 18:30:21.596840  294422 cri.go:89] found id: ""
	I1003 18:30:21.596893  294422 ssh_runner.go:195] Run: sudo runc list -f json
	I1003 18:30:21.613776  294422 out.go:203] 
	W1003 18:30:21.616830  294422 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-03T18:30:21Z" level=error msg="open /run/runc: no such file or directory"
	
	W1003 18:30:21.616913  294422 out.go:285] * 
	W1003 18:30:21.625430  294422 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 18:30:21.628470  294422 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-arm64 -p addons-952140 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.36s)

                                                
                                    
x
+
TestAddons/parallel/CSI (40.32s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1003 18:30:18.423331  286434 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1003 18:30:18.427422  286434 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1003 18:30:18.427447  286434 kapi.go:107] duration metric: took 4.126784ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 4.135933ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-952140 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-952140 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-952140 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-952140 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-952140 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-952140 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-952140 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-952140 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-952140 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-952140 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-952140 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-952140 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-952140 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-952140 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-952140 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-952140 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-952140 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [9b0ec362-2665-4b50-bafa-882018718e00] Pending
helpers_test.go:352: "task-pv-pod" [9b0ec362-2665-4b50-bafa-882018718e00] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [9b0ec362-2665-4b50-bafa-882018718e00] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.003207241s
addons_test.go:572: (dbg) Run:  kubectl --context addons-952140 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-952140 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-952140 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-952140 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-952140 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-952140 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-952140 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-952140 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-952140 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-952140 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [6a94ea8c-c951-4508-9644-092ebe111a17] Pending
helpers_test.go:352: "task-pv-pod-restore" [6a94ea8c-c951-4508-9644-092ebe111a17] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [6a94ea8c-c951-4508-9644-092ebe111a17] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003473502s
addons_test.go:614: (dbg) Run:  kubectl --context addons-952140 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-952140 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-952140 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-952140 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-952140 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (299.620237ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 18:30:58.277832  295378 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:30:58.278561  295378 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:30:58.278599  295378 out.go:374] Setting ErrFile to fd 2...
	I1003 18:30:58.278623  295378 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:30:58.278932  295378 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-284583/.minikube/bin
	I1003 18:30:58.279352  295378 mustload.go:65] Loading cluster: addons-952140
	I1003 18:30:58.279760  295378 config.go:182] Loaded profile config "addons-952140": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:30:58.279828  295378 addons.go:606] checking whether the cluster is paused
	I1003 18:30:58.279992  295378 config.go:182] Loaded profile config "addons-952140": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:30:58.280028  295378 host.go:66] Checking if "addons-952140" exists ...
	I1003 18:30:58.281080  295378 cli_runner.go:164] Run: docker container inspect addons-952140 --format={{.State.Status}}
	I1003 18:30:58.299007  295378 ssh_runner.go:195] Run: systemctl --version
	I1003 18:30:58.299059  295378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952140
	I1003 18:30:58.317690  295378 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/addons-952140/id_rsa Username:docker}
	I1003 18:30:58.415794  295378 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1003 18:30:58.415901  295378 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1003 18:30:58.453862  295378 cri.go:89] found id: "a2a54b8525b1b03c3294a286e260174ebeb999c736e2e29750632824346e2b8a"
	I1003 18:30:58.453943  295378 cri.go:89] found id: "764f61b1d1b52574dff121ce3057ed9a2791b059752cb80d76e6a5ae323e3765"
	I1003 18:30:58.453963  295378 cri.go:89] found id: "5520f176a27b0060104f01c653743a97419cd7df90b959dc02f4359563db372f"
	I1003 18:30:58.453979  295378 cri.go:89] found id: "a55dd027b4c2417a4e857716af2ec80adf3ee359efc1fcdea96ae017da8094db"
	I1003 18:30:58.454012  295378 cri.go:89] found id: "d11765424ad977c42ad7828e106df59281b6041a6b85d34d604738d051cc2257"
	I1003 18:30:58.454034  295378 cri.go:89] found id: "ba5695d849b4ff437b5c5a4c73351652ea5b855eb0061d3826ad4a2a76513650"
	I1003 18:30:58.454055  295378 cri.go:89] found id: "351cf9cd8e8f80a1ce058ad47867cc1e9e314f2100ba10ef01326c91fbea576c"
	I1003 18:30:58.454092  295378 cri.go:89] found id: "c2d0db82bc7f2bcfc4af04f3633a094c0e554392449fbf12a24ed377b92f941b"
	I1003 18:30:58.454114  295378 cri.go:89] found id: "5925d6c423d79839f9eb8870977fb293e3c6b1ece77aa59bf7c2a4b120ca3ad3"
	I1003 18:30:58.454134  295378 cri.go:89] found id: "228036e3d30218b16026d557d3264fc361f0c7c42c143fc93a96fd7945d8bdf3"
	I1003 18:30:58.454153  295378 cri.go:89] found id: "d38c57e36e3594ef4f8f3d28db24890c659027ed75977701aa969ce142c27e0e"
	I1003 18:30:58.454181  295378 cri.go:89] found id: "8ab3974a2c302b83e53bc5a243fae87bdec8ed1ca2da979ebcc29dabb8f30fc4"
	I1003 18:30:58.454204  295378 cri.go:89] found id: "70497b5707570324a85bde79dadf41e8e6ded9bd45545ee1a7756ba32eed86d6"
	I1003 18:30:58.454221  295378 cri.go:89] found id: "26742750260bfb48e7909f410307ee53b3dafe6b84bb3a467c505e24d28d4fe1"
	I1003 18:30:58.454241  295378 cri.go:89] found id: "7099c81ca982b78bfa4dd5784e69f027f40fb02b99bce69ec1f792090be6a50b"
	I1003 18:30:58.454272  295378 cri.go:89] found id: "2657f869bb8529138f74b802beedcd922a626ac30c50e54c72731eaff1b930c0"
	I1003 18:30:58.454303  295378 cri.go:89] found id: "82907fef03cc43b849878194de7aef8c729ee89dcf5fddba29650a239ab81e90"
	I1003 18:30:58.454321  295378 cri.go:89] found id: "28257b7548dee5496025c494fc69f7d27b158c004459fe9cf7e145244cc402b4"
	I1003 18:30:58.454353  295378 cri.go:89] found id: "1a59139ec0face1693267071ca3c3ba3e8eff397418ffbf25f3682c68eee244a"
	I1003 18:30:58.454375  295378 cri.go:89] found id: "23bd53ece83d04d894e5fc60fda04a6f8bdfe8d6c59ffad6c4dcacc168ec4ed8"
	I1003 18:30:58.454393  295378 cri.go:89] found id: "1cbcaf90a28158f2a4d5495c4b92561650195912704daec05dcf1d9b56429e5c"
	I1003 18:30:58.454412  295378 cri.go:89] found id: "22981c6dff74a1d10571b76dae9b7bbbb33ca3843ab35927e1e5997100c5be1c"
	I1003 18:30:58.454440  295378 cri.go:89] found id: "e937e437e1e79c6bcbb92c82ee9849b6f8ceb2c5980d23b084e27a6fb88ab45a"
	I1003 18:30:58.454462  295378 cri.go:89] found id: ""
	I1003 18:30:58.454564  295378 ssh_runner.go:195] Run: sudo runc list -f json
	I1003 18:30:58.470640  295378 out.go:203] 
	W1003 18:30:58.473619  295378 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-03T18:30:58Z" level=error msg="open /run/runc: no such file or directory"
	
	W1003 18:30:58.473651  295378 out.go:285] * 
	W1003 18:30:58.480191  295378 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 18:30:58.483225  295378 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-arm64 -p addons-952140 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-952140 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-952140 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (252.921379ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 18:30:58.543491  295422 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:30:58.544298  295422 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:30:58.544313  295422 out.go:374] Setting ErrFile to fd 2...
	I1003 18:30:58.544319  295422 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:30:58.544596  295422 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-284583/.minikube/bin
	I1003 18:30:58.544987  295422 mustload.go:65] Loading cluster: addons-952140
	I1003 18:30:58.545356  295422 config.go:182] Loaded profile config "addons-952140": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:30:58.545374  295422 addons.go:606] checking whether the cluster is paused
	I1003 18:30:58.545479  295422 config.go:182] Loaded profile config "addons-952140": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:30:58.545494  295422 host.go:66] Checking if "addons-952140" exists ...
	I1003 18:30:58.545995  295422 cli_runner.go:164] Run: docker container inspect addons-952140 --format={{.State.Status}}
	I1003 18:30:58.564283  295422 ssh_runner.go:195] Run: systemctl --version
	I1003 18:30:58.564341  295422 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952140
	I1003 18:30:58.581877  295422 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/addons-952140/id_rsa Username:docker}
	I1003 18:30:58.675820  295422 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1003 18:30:58.675953  295422 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1003 18:30:58.707297  295422 cri.go:89] found id: "a2a54b8525b1b03c3294a286e260174ebeb999c736e2e29750632824346e2b8a"
	I1003 18:30:58.707322  295422 cri.go:89] found id: "764f61b1d1b52574dff121ce3057ed9a2791b059752cb80d76e6a5ae323e3765"
	I1003 18:30:58.707327  295422 cri.go:89] found id: "5520f176a27b0060104f01c653743a97419cd7df90b959dc02f4359563db372f"
	I1003 18:30:58.707336  295422 cri.go:89] found id: "a55dd027b4c2417a4e857716af2ec80adf3ee359efc1fcdea96ae017da8094db"
	I1003 18:30:58.707340  295422 cri.go:89] found id: "d11765424ad977c42ad7828e106df59281b6041a6b85d34d604738d051cc2257"
	I1003 18:30:58.707344  295422 cri.go:89] found id: "ba5695d849b4ff437b5c5a4c73351652ea5b855eb0061d3826ad4a2a76513650"
	I1003 18:30:58.707347  295422 cri.go:89] found id: "351cf9cd8e8f80a1ce058ad47867cc1e9e314f2100ba10ef01326c91fbea576c"
	I1003 18:30:58.707350  295422 cri.go:89] found id: "c2d0db82bc7f2bcfc4af04f3633a094c0e554392449fbf12a24ed377b92f941b"
	I1003 18:30:58.707358  295422 cri.go:89] found id: "5925d6c423d79839f9eb8870977fb293e3c6b1ece77aa59bf7c2a4b120ca3ad3"
	I1003 18:30:58.707364  295422 cri.go:89] found id: "228036e3d30218b16026d557d3264fc361f0c7c42c143fc93a96fd7945d8bdf3"
	I1003 18:30:58.707368  295422 cri.go:89] found id: "d38c57e36e3594ef4f8f3d28db24890c659027ed75977701aa969ce142c27e0e"
	I1003 18:30:58.707371  295422 cri.go:89] found id: "8ab3974a2c302b83e53bc5a243fae87bdec8ed1ca2da979ebcc29dabb8f30fc4"
	I1003 18:30:58.707375  295422 cri.go:89] found id: "70497b5707570324a85bde79dadf41e8e6ded9bd45545ee1a7756ba32eed86d6"
	I1003 18:30:58.707378  295422 cri.go:89] found id: "26742750260bfb48e7909f410307ee53b3dafe6b84bb3a467c505e24d28d4fe1"
	I1003 18:30:58.707381  295422 cri.go:89] found id: "7099c81ca982b78bfa4dd5784e69f027f40fb02b99bce69ec1f792090be6a50b"
	I1003 18:30:58.707386  295422 cri.go:89] found id: "2657f869bb8529138f74b802beedcd922a626ac30c50e54c72731eaff1b930c0"
	I1003 18:30:58.707397  295422 cri.go:89] found id: "82907fef03cc43b849878194de7aef8c729ee89dcf5fddba29650a239ab81e90"
	I1003 18:30:58.707401  295422 cri.go:89] found id: "28257b7548dee5496025c494fc69f7d27b158c004459fe9cf7e145244cc402b4"
	I1003 18:30:58.707405  295422 cri.go:89] found id: "1a59139ec0face1693267071ca3c3ba3e8eff397418ffbf25f3682c68eee244a"
	I1003 18:30:58.707408  295422 cri.go:89] found id: "23bd53ece83d04d894e5fc60fda04a6f8bdfe8d6c59ffad6c4dcacc168ec4ed8"
	I1003 18:30:58.707412  295422 cri.go:89] found id: "1cbcaf90a28158f2a4d5495c4b92561650195912704daec05dcf1d9b56429e5c"
	I1003 18:30:58.707415  295422 cri.go:89] found id: "22981c6dff74a1d10571b76dae9b7bbbb33ca3843ab35927e1e5997100c5be1c"
	I1003 18:30:58.707418  295422 cri.go:89] found id: "e937e437e1e79c6bcbb92c82ee9849b6f8ceb2c5980d23b084e27a6fb88ab45a"
	I1003 18:30:58.707421  295422 cri.go:89] found id: ""
	I1003 18:30:58.707480  295422 ssh_runner.go:195] Run: sudo runc list -f json
	I1003 18:30:58.723476  295422 out.go:203] 
	W1003 18:30:58.726457  295422 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-03T18:30:58Z" level=error msg="open /run/runc: no such file or directory"
	
	W1003 18:30:58.726484  295422 out.go:285] * 
	W1003 18:30:58.733009  295422 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 18:30:58.736057  295422 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-arm64 -p addons-952140 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (40.32s)
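
The hpvc and hpvc-restore waits above are driven by repeatedly polling "kubectl get pvc ... -o jsonpath={.status.phase}" until the phase is Bound. For manual reruns outside the harness, a roughly equivalent sketch using kubectl's built-in wait (assumes a kubectl new enough to support jsonpath conditions; this is not what the test itself executes):

    kubectl --context addons-952140 wait pvc/hpvc --for=jsonpath='{.status.phase}'=Bound --timeout=6m
    kubectl --context addons-952140 wait pod/task-pv-pod --for=condition=Ready --timeout=6m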

                                                
                                    
x
+
TestAddons/parallel/Headlamp (3.16s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-952140 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable headlamp -p addons-952140 --alsologtostderr -v=1: exit status 11 (248.822868ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 18:29:54.299097  293215 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:29:54.299960  293215 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:29:54.299974  293215 out.go:374] Setting ErrFile to fd 2...
	I1003 18:29:54.299979  293215 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:29:54.300235  293215 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-284583/.minikube/bin
	I1003 18:29:54.300534  293215 mustload.go:65] Loading cluster: addons-952140
	I1003 18:29:54.300946  293215 config.go:182] Loaded profile config "addons-952140": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:29:54.300966  293215 addons.go:606] checking whether the cluster is paused
	I1003 18:29:54.301072  293215 config.go:182] Loaded profile config "addons-952140": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:29:54.301087  293215 host.go:66] Checking if "addons-952140" exists ...
	I1003 18:29:54.301511  293215 cli_runner.go:164] Run: docker container inspect addons-952140 --format={{.State.Status}}
	I1003 18:29:54.318422  293215 ssh_runner.go:195] Run: systemctl --version
	I1003 18:29:54.318482  293215 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952140
	I1003 18:29:54.335665  293215 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/addons-952140/id_rsa Username:docker}
	I1003 18:29:54.431423  293215 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1003 18:29:54.431509  293215 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1003 18:29:54.461868  293215 cri.go:89] found id: "a2a54b8525b1b03c3294a286e260174ebeb999c736e2e29750632824346e2b8a"
	I1003 18:29:54.461942  293215 cri.go:89] found id: "764f61b1d1b52574dff121ce3057ed9a2791b059752cb80d76e6a5ae323e3765"
	I1003 18:29:54.461954  293215 cri.go:89] found id: "5520f176a27b0060104f01c653743a97419cd7df90b959dc02f4359563db372f"
	I1003 18:29:54.461959  293215 cri.go:89] found id: "a55dd027b4c2417a4e857716af2ec80adf3ee359efc1fcdea96ae017da8094db"
	I1003 18:29:54.461963  293215 cri.go:89] found id: "d11765424ad977c42ad7828e106df59281b6041a6b85d34d604738d051cc2257"
	I1003 18:29:54.461967  293215 cri.go:89] found id: "ba5695d849b4ff437b5c5a4c73351652ea5b855eb0061d3826ad4a2a76513650"
	I1003 18:29:54.461970  293215 cri.go:89] found id: "351cf9cd8e8f80a1ce058ad47867cc1e9e314f2100ba10ef01326c91fbea576c"
	I1003 18:29:54.461973  293215 cri.go:89] found id: "c2d0db82bc7f2bcfc4af04f3633a094c0e554392449fbf12a24ed377b92f941b"
	I1003 18:29:54.461980  293215 cri.go:89] found id: "5925d6c423d79839f9eb8870977fb293e3c6b1ece77aa59bf7c2a4b120ca3ad3"
	I1003 18:29:54.461994  293215 cri.go:89] found id: "228036e3d30218b16026d557d3264fc361f0c7c42c143fc93a96fd7945d8bdf3"
	I1003 18:29:54.462006  293215 cri.go:89] found id: "d38c57e36e3594ef4f8f3d28db24890c659027ed75977701aa969ce142c27e0e"
	I1003 18:29:54.462010  293215 cri.go:89] found id: "8ab3974a2c302b83e53bc5a243fae87bdec8ed1ca2da979ebcc29dabb8f30fc4"
	I1003 18:29:54.462013  293215 cri.go:89] found id: "70497b5707570324a85bde79dadf41e8e6ded9bd45545ee1a7756ba32eed86d6"
	I1003 18:29:54.462016  293215 cri.go:89] found id: "26742750260bfb48e7909f410307ee53b3dafe6b84bb3a467c505e24d28d4fe1"
	I1003 18:29:54.462020  293215 cri.go:89] found id: "7099c81ca982b78bfa4dd5784e69f027f40fb02b99bce69ec1f792090be6a50b"
	I1003 18:29:54.462025  293215 cri.go:89] found id: "2657f869bb8529138f74b802beedcd922a626ac30c50e54c72731eaff1b930c0"
	I1003 18:29:54.462034  293215 cri.go:89] found id: "82907fef03cc43b849878194de7aef8c729ee89dcf5fddba29650a239ab81e90"
	I1003 18:29:54.462038  293215 cri.go:89] found id: "28257b7548dee5496025c494fc69f7d27b158c004459fe9cf7e145244cc402b4"
	I1003 18:29:54.462041  293215 cri.go:89] found id: "1a59139ec0face1693267071ca3c3ba3e8eff397418ffbf25f3682c68eee244a"
	I1003 18:29:54.462044  293215 cri.go:89] found id: "23bd53ece83d04d894e5fc60fda04a6f8bdfe8d6c59ffad6c4dcacc168ec4ed8"
	I1003 18:29:54.462048  293215 cri.go:89] found id: "1cbcaf90a28158f2a4d5495c4b92561650195912704daec05dcf1d9b56429e5c"
	I1003 18:29:54.462058  293215 cri.go:89] found id: "22981c6dff74a1d10571b76dae9b7bbbb33ca3843ab35927e1e5997100c5be1c"
	I1003 18:29:54.462062  293215 cri.go:89] found id: "e937e437e1e79c6bcbb92c82ee9849b6f8ceb2c5980d23b084e27a6fb88ab45a"
	I1003 18:29:54.462064  293215 cri.go:89] found id: ""
	I1003 18:29:54.462114  293215 ssh_runner.go:195] Run: sudo runc list -f json
	I1003 18:29:54.476833  293215 out.go:203] 
	W1003 18:29:54.479595  293215 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-03T18:29:54Z" level=error msg="open /run/runc: no such file or directory"
	
	W1003 18:29:54.479618  293215 out.go:285] * 
	W1003 18:29:54.486179  293215 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 18:29:54.489293  293215 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-arm64 addons enable headlamp -p addons-952140 --alsologtostderr -v=1": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-952140
helpers_test.go:243: (dbg) docker inspect addons-952140:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "85b69962c0dc4c2d215c8870f97829566d7c577f428241564d0dd056e84304a6",
	        "Created": "2025-10-03T18:27:19.855189615Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 287583,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-03T18:27:19.913676844Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/85b69962c0dc4c2d215c8870f97829566d7c577f428241564d0dd056e84304a6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/85b69962c0dc4c2d215c8870f97829566d7c577f428241564d0dd056e84304a6/hostname",
	        "HostsPath": "/var/lib/docker/containers/85b69962c0dc4c2d215c8870f97829566d7c577f428241564d0dd056e84304a6/hosts",
	        "LogPath": "/var/lib/docker/containers/85b69962c0dc4c2d215c8870f97829566d7c577f428241564d0dd056e84304a6/85b69962c0dc4c2d215c8870f97829566d7c577f428241564d0dd056e84304a6-json.log",
	        "Name": "/addons-952140",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-952140:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-952140",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "85b69962c0dc4c2d215c8870f97829566d7c577f428241564d0dd056e84304a6",
	                "LowerDir": "/var/lib/docker/overlay2/af2feed79df5584ff68bcd67773f16b1405a7fad3408cae5965483c88a8058de-init/diff:/var/lib/docker/overlay2/87b205803817b0b71a214d995ab7e10a92033bbf72d76d6e052f1d21ccecb313/diff",
	                "MergedDir": "/var/lib/docker/overlay2/af2feed79df5584ff68bcd67773f16b1405a7fad3408cae5965483c88a8058de/merged",
	                "UpperDir": "/var/lib/docker/overlay2/af2feed79df5584ff68bcd67773f16b1405a7fad3408cae5965483c88a8058de/diff",
	                "WorkDir": "/var/lib/docker/overlay2/af2feed79df5584ff68bcd67773f16b1405a7fad3408cae5965483c88a8058de/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-952140",
	                "Source": "/var/lib/docker/volumes/addons-952140/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-952140",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-952140",
	                "name.minikube.sigs.k8s.io": "addons-952140",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "08b67d1352c657f54ec558bf835b927545829aa9a1fb88449a14ba61bd7df350",
	            "SandboxKey": "/var/run/docker/netns/08b67d1352c6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-952140": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8e:55:f8:24:f6:5c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e102fc4e4e8b2c7a717e7cf7e622833192d1b0f46c494a0da77e2c59f148cd18",
	                    "EndpointID": "5396c29e50147b89cfaea761a6acbfd0662c08046a695d7b749d5f110fd8d0fa",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-952140",
	                        "85b69962c0dc"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
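
The port map in the inspect output above is what the tooling uses to reach the node: the cli_runner lines earlier in this report resolve the SSH endpoint (127.0.0.1:33138) with a Go template over NetworkSettings.Ports. The same lookup as a plain shell command for ad-hoc debugging (shell quoting here differs slightly from the logged Go-constructed invocation):

    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-952140
    # e.g. to get a shell on the node with the key shown in the sshutil line above:
    ssh -p 33138 -i /home/jenkins/minikube-integration/21625-284583/.minikube/machines/addons-952140/id_rsa docker@127.0.0.1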
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-952140 -n addons-952140
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-952140 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-952140 logs -n 25: (1.425814148s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-487194 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-487194   │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ delete  │ -p download-only-487194                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-487194   │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ start   │ -o=json --download-only -p download-only-217819 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-217819   │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ delete  │ -p download-only-217819                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-217819   │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ delete  │ -p download-only-487194                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-487194   │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ delete  │ -p download-only-217819                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-217819   │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ start   │ --download-only -p download-docker-526019 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-526019 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ delete  │ -p download-docker-526019                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-526019 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ start   │ --download-only -p binary-mirror-482654 --alsologtostderr --binary-mirror http://127.0.0.1:38575 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-482654   │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ delete  │ -p binary-mirror-482654                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-482654   │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ addons  │ enable dashboard -p addons-952140                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-952140          │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ addons  │ disable dashboard -p addons-952140                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-952140          │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ start   │ -p addons-952140 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-952140          │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:29 UTC │
	│ addons  │ addons-952140 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-952140          │ jenkins │ v1.37.0 │ 03 Oct 25 18:29 UTC │                     │
	│ addons  │ addons-952140 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-952140          │ jenkins │ v1.37.0 │ 03 Oct 25 18:29 UTC │                     │
	│ addons  │ enable headlamp -p addons-952140 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-952140          │ jenkins │ v1.37.0 │ 03 Oct 25 18:29 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/03 18:26:53
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 18:26:53.258370  287189 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:26:53.258537  287189 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:26:53.258566  287189 out.go:374] Setting ErrFile to fd 2...
	I1003 18:26:53.258587  287189 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:26:53.258967  287189 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-284583/.minikube/bin
	I1003 18:26:53.259863  287189 out.go:368] Setting JSON to false
	I1003 18:26:53.260750  287189 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":4165,"bootTime":1759511849,"procs":149,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1003 18:26:53.260817  287189 start.go:140] virtualization:  
	I1003 18:26:53.264033  287189 out.go:179] * [addons-952140] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1003 18:26:53.267739  287189 out.go:179]   - MINIKUBE_LOCATION=21625
	I1003 18:26:53.267804  287189 notify.go:220] Checking for updates...
	I1003 18:26:53.273340  287189 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 18:26:53.276093  287189 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21625-284583/kubeconfig
	I1003 18:26:53.278928  287189 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-284583/.minikube
	I1003 18:26:53.281777  287189 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1003 18:26:53.284697  287189 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 18:26:53.287745  287189 driver.go:421] Setting default libvirt URI to qemu:///system
	I1003 18:26:53.307839  287189 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1003 18:26:53.307970  287189 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:26:53.373367  287189 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:49 SystemTime:2025-10-03 18:26:53.364113847 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1003 18:26:53.373470  287189 docker.go:318] overlay module found
	I1003 18:26:53.378356  287189 out.go:179] * Using the docker driver based on user configuration
	I1003 18:26:53.381255  287189 start.go:304] selected driver: docker
	I1003 18:26:53.381280  287189 start.go:924] validating driver "docker" against <nil>
	I1003 18:26:53.381294  287189 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 18:26:53.382012  287189 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:26:53.434317  287189 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:49 SystemTime:2025-10-03 18:26:53.424871215 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1003 18:26:53.434483  287189 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1003 18:26:53.434718  287189 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 18:26:53.437564  287189 out.go:179] * Using Docker driver with root privileges
	I1003 18:26:53.440295  287189 cni.go:84] Creating CNI manager for ""
	I1003 18:26:53.440365  287189 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 18:26:53.440378  287189 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1003 18:26:53.440468  287189 start.go:348] cluster config:
	{Name:addons-952140 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-952140 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1003 18:26:53.443515  287189 out.go:179] * Starting "addons-952140" primary control-plane node in "addons-952140" cluster
	I1003 18:26:53.446262  287189 cache.go:123] Beginning downloading kic base image for docker with crio
	I1003 18:26:53.449094  287189 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1003 18:26:53.451835  287189 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 18:26:53.451864  287189 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1003 18:26:53.451885  287189 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21625-284583/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1003 18:26:53.451903  287189 cache.go:58] Caching tarball of preloaded images
	I1003 18:26:53.451981  287189 preload.go:233] Found /home/jenkins/minikube-integration/21625-284583/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1003 18:26:53.451991  287189 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1003 18:26:53.452349  287189 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/config.json ...
	I1003 18:26:53.452371  287189 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/config.json: {Name:mk3ec801b1a665b1e71f8e04e2ef22390583bd1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:26:53.468260  287189 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d to local cache
	I1003 18:26:53.468397  287189 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory
	I1003 18:26:53.468433  287189 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory, skipping pull
	I1003 18:26:53.468443  287189 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in cache, skipping pull
	I1003 18:26:53.468450  287189 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d as a tarball
	I1003 18:26:53.468460  287189 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d from local cache
	I1003 18:27:11.531443  287189 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d from cached tarball
	I1003 18:27:11.531482  287189 cache.go:232] Successfully downloaded all kic artifacts
	I1003 18:27:11.531512  287189 start.go:360] acquireMachinesLock for addons-952140: {Name:mkd6a11acda609d82d4d50b6e8e52d51cc676e0e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 18:27:11.531635  287189 start.go:364] duration metric: took 97.671µs to acquireMachinesLock for "addons-952140"
	I1003 18:27:11.531667  287189 start.go:93] Provisioning new machine with config: &{Name:addons-952140 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-952140 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1003 18:27:11.531753  287189 start.go:125] createHost starting for "" (driver="docker")
	I1003 18:27:11.535205  287189 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1003 18:27:11.535450  287189 start.go:159] libmachine.API.Create for "addons-952140" (driver="docker")
	I1003 18:27:11.535499  287189 client.go:168] LocalClient.Create starting
	I1003 18:27:11.535623  287189 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem
	I1003 18:27:12.987225  287189 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem
	I1003 18:27:13.052378  287189 cli_runner.go:164] Run: docker network inspect addons-952140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1003 18:27:13.068576  287189 cli_runner.go:211] docker network inspect addons-952140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1003 18:27:13.068677  287189 network_create.go:284] running [docker network inspect addons-952140] to gather additional debugging logs...
	I1003 18:27:13.068700  287189 cli_runner.go:164] Run: docker network inspect addons-952140
	W1003 18:27:13.084782  287189 cli_runner.go:211] docker network inspect addons-952140 returned with exit code 1
	I1003 18:27:13.084831  287189 network_create.go:287] error running [docker network inspect addons-952140]: docker network inspect addons-952140: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-952140 not found
	I1003 18:27:13.084846  287189 network_create.go:289] output of [docker network inspect addons-952140]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-952140 not found
	
	** /stderr **
	I1003 18:27:13.084967  287189 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 18:27:13.102476  287189 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001d260a0}
	I1003 18:27:13.102515  287189 network_create.go:124] attempt to create docker network addons-952140 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1003 18:27:13.102573  287189 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-952140 addons-952140
	I1003 18:27:13.159876  287189 network_create.go:108] docker network addons-952140 192.168.49.0/24 created
	I1003 18:27:13.159910  287189 kic.go:121] calculated static IP "192.168.49.2" for the "addons-952140" container
	I1003 18:27:13.159994  287189 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1003 18:27:13.175310  287189 cli_runner.go:164] Run: docker volume create addons-952140 --label name.minikube.sigs.k8s.io=addons-952140 --label created_by.minikube.sigs.k8s.io=true
	I1003 18:27:13.196077  287189 oci.go:103] Successfully created a docker volume addons-952140
	I1003 18:27:13.196196  287189 cli_runner.go:164] Run: docker run --rm --name addons-952140-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-952140 --entrypoint /usr/bin/test -v addons-952140:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1003 18:27:15.341088  287189 cli_runner.go:217] Completed: docker run --rm --name addons-952140-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-952140 --entrypoint /usr/bin/test -v addons-952140:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib: (2.144842495s)
	I1003 18:27:15.341121  287189 oci.go:107] Successfully prepared a docker volume addons-952140
	I1003 18:27:15.341155  287189 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 18:27:15.341176  287189 kic.go:194] Starting extracting preloaded images to volume ...
	I1003 18:27:15.341250  287189 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21625-284583/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-952140:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1003 18:27:19.781444  287189 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21625-284583/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-952140:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.440151134s)
	I1003 18:27:19.781476  287189 kic.go:203] duration metric: took 4.440297572s to extract preloaded images to volume ...
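	# The "docker run --rm ... --entrypoint /usr/bin/tar" step above extracts the host-side
	# preload tarball into the named volume addons-952140. A standalone sketch of the same
	# pattern (paths and image tag are illustrative placeholders, not the exact values used):
	docker run --rm \
	  -v /path/to/preloaded-images.tar.lz4:/preloaded.tar:ro \
	  -v addons-952140:/extractDir \
	  --entrypoint /usr/bin/tar \
	  gcr.io/k8s-minikube/kicbase-builds:<tag> \
	  -I lz4 -xf /preloaded.tar -C /extractDir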
	W1003 18:27:19.781627  287189 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1003 18:27:19.781738  287189 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1003 18:27:19.840410  287189 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-952140 --name addons-952140 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-952140 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-952140 --network addons-952140 --ip 192.168.49.2 --volume addons-952140:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1003 18:27:20.168240  287189 cli_runner.go:164] Run: docker container inspect addons-952140 --format={{.State.Running}}
	I1003 18:27:20.187708  287189 cli_runner.go:164] Run: docker container inspect addons-952140 --format={{.State.Status}}
	I1003 18:27:20.215470  287189 cli_runner.go:164] Run: docker exec addons-952140 stat /var/lib/dpkg/alternatives/iptables
	I1003 18:27:20.269170  287189 oci.go:144] the created container "addons-952140" has a running status.
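	# The node container created above publishes SSH (22), Docker (2376), the registry (5000)
	# and the API server (8443/32443) on dynamically assigned 127.0.0.1 ports. A quick way to
	# see which host ports were assigned for this profile (sketch, assuming the container name):
	docker port addons-952140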
	I1003 18:27:20.269198  287189 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21625-284583/.minikube/machines/addons-952140/id_rsa...
	I1003 18:27:20.371880  287189 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21625-284583/.minikube/machines/addons-952140/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1003 18:27:20.392209  287189 cli_runner.go:164] Run: docker container inspect addons-952140 --format={{.State.Status}}
	I1003 18:27:20.409313  287189 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1003 18:27:20.409335  287189 kic_runner.go:114] Args: [docker exec --privileged addons-952140 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1003 18:27:20.463222  287189 cli_runner.go:164] Run: docker container inspect addons-952140 --format={{.State.Status}}
	I1003 18:27:20.501808  287189 machine.go:93] provisionDockerMachine start ...
	I1003 18:27:20.501919  287189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952140
	I1003 18:27:20.523050  287189 main.go:141] libmachine: Using SSH client type: native
	I1003 18:27:20.523428  287189 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1003 18:27:20.523445  287189 main.go:141] libmachine: About to run SSH command:
	hostname
	I1003 18:27:20.524034  287189 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1003 18:27:23.660422  287189 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-952140
	
	I1003 18:27:23.660447  287189 ubuntu.go:182] provisioning hostname "addons-952140"
	I1003 18:27:23.660515  287189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952140
	I1003 18:27:23.678923  287189 main.go:141] libmachine: Using SSH client type: native
	I1003 18:27:23.679241  287189 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1003 18:27:23.679260  287189 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-952140 && echo "addons-952140" | sudo tee /etc/hostname
	I1003 18:27:23.817967  287189 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-952140
	
	I1003 18:27:23.818049  287189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952140
	I1003 18:27:23.835450  287189 main.go:141] libmachine: Using SSH client type: native
	I1003 18:27:23.835760  287189 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1003 18:27:23.835781  287189 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-952140' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-952140/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-952140' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 18:27:23.965132  287189 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 18:27:23.965205  287189 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21625-284583/.minikube CaCertPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21625-284583/.minikube}
	I1003 18:27:23.965231  287189 ubuntu.go:190] setting up certificates
	I1003 18:27:23.965240  287189 provision.go:84] configureAuth start
	I1003 18:27:23.965309  287189 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-952140
	I1003 18:27:23.997785  287189 provision.go:143] copyHostCerts
	I1003 18:27:23.997878  287189 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21625-284583/.minikube/key.pem (1675 bytes)
	I1003 18:27:23.998039  287189 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21625-284583/.minikube/ca.pem (1082 bytes)
	I1003 18:27:23.998110  287189 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21625-284583/.minikube/cert.pem (1123 bytes)
	I1003 18:27:23.998164  287189 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca-key.pem org=jenkins.addons-952140 san=[127.0.0.1 192.168.49.2 addons-952140 localhost minikube]
	I1003 18:27:24.847072  287189 provision.go:177] copyRemoteCerts
	I1003 18:27:24.847166  287189 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 18:27:24.847217  287189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952140
	I1003 18:27:24.863338  287189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/addons-952140/id_rsa Username:docker}
	I1003 18:27:24.955970  287189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 18:27:24.973258  287189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1003 18:27:24.992658  287189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1003 18:27:25.011090  287189 provision.go:87] duration metric: took 1.045836815s to configureAuth
	I1003 18:27:25.011160  287189 ubuntu.go:206] setting minikube options for container-runtime
	I1003 18:27:25.011378  287189 config.go:182] Loaded profile config "addons-952140": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:27:25.011522  287189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952140
	I1003 18:27:25.028437  287189 main.go:141] libmachine: Using SSH client type: native
	I1003 18:27:25.028815  287189 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1003 18:27:25.028840  287189 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1003 18:27:25.266764  287189 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1003 18:27:25.266790  287189 machine.go:96] duration metric: took 4.76496131s to provisionDockerMachine
	I1003 18:27:25.266800  287189 client.go:171] duration metric: took 13.731291012s to LocalClient.Create
	I1003 18:27:25.266813  287189 start.go:167] duration metric: took 13.731365388s to libmachine.API.Create "addons-952140"
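	# Provisioning above wrote CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube and
	# restarted CRI-O. A sketch for checking that drop-in from the host, assuming the kicbase
	# crio unit sources the file via EnvironmentFile (run against the node container):
	docker exec addons-952140 cat /etc/sysconfig/crio.minikube
	docker exec addons-952140 systemctl cat crio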
	I1003 18:27:25.266819  287189 start.go:293] postStartSetup for "addons-952140" (driver="docker")
	I1003 18:27:25.266829  287189 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 18:27:25.266896  287189 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 18:27:25.266942  287189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952140
	I1003 18:27:25.284199  287189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/addons-952140/id_rsa Username:docker}
	I1003 18:27:25.380680  287189 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 18:27:25.383911  287189 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1003 18:27:25.383939  287189 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1003 18:27:25.383949  287189 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-284583/.minikube/addons for local assets ...
	I1003 18:27:25.384016  287189 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-284583/.minikube/files for local assets ...
	I1003 18:27:25.384044  287189 start.go:296] duration metric: took 117.219209ms for postStartSetup
	I1003 18:27:25.384362  287189 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-952140
	I1003 18:27:25.400538  287189 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/config.json ...
	I1003 18:27:25.400955  287189 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 18:27:25.401020  287189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952140
	I1003 18:27:25.417502  287189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/addons-952140/id_rsa Username:docker}
	I1003 18:27:25.509553  287189 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1003 18:27:25.514278  287189 start.go:128] duration metric: took 13.982508395s to createHost
	I1003 18:27:25.514348  287189 start.go:83] releasing machines lock for "addons-952140", held for 13.982697091s
	I1003 18:27:25.514453  287189 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-952140
	I1003 18:27:25.533854  287189 ssh_runner.go:195] Run: cat /version.json
	I1003 18:27:25.533913  287189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952140
	I1003 18:27:25.534165  287189 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 18:27:25.534232  287189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952140
	I1003 18:27:25.552216  287189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/addons-952140/id_rsa Username:docker}
	I1003 18:27:25.554662  287189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/addons-952140/id_rsa Username:docker}
	I1003 18:27:25.739493  287189 ssh_runner.go:195] Run: systemctl --version
	I1003 18:27:25.746062  287189 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1003 18:27:25.782177  287189 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1003 18:27:25.786238  287189 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 18:27:25.786305  287189 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 18:27:25.814818  287189 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1003 18:27:25.814857  287189 start.go:495] detecting cgroup driver to use...
	I1003 18:27:25.814893  287189 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1003 18:27:25.814959  287189 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 18:27:25.832053  287189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 18:27:25.844195  287189 docker.go:218] disabling cri-docker service (if available) ...
	I1003 18:27:25.844258  287189 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1003 18:27:25.862182  287189 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1003 18:27:25.880466  287189 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1003 18:27:25.994085  287189 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1003 18:27:26.114139  287189 docker.go:234] disabling docker service ...
	I1003 18:27:26.114226  287189 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1003 18:27:26.137484  287189 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1003 18:27:26.150396  287189 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1003 18:27:26.256024  287189 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1003 18:27:26.372857  287189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1003 18:27:26.385960  287189 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 18:27:26.399988  287189 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1003 18:27:26.400053  287189 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:27:26.408800  287189 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1003 18:27:26.408934  287189 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:27:26.418191  287189 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:27:26.426697  287189 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:27:26.435134  287189 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 18:27:26.443038  287189 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:27:26.451740  287189 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:27:26.464608  287189 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:27:26.473357  287189 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 18:27:26.480576  287189 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 18:27:26.488014  287189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:27:26.597698  287189 ssh_runner.go:195] Run: sudo systemctl restart crio
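	# The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs
	# cgroup manager, conmon_cgroup, unprivileged-port sysctl) before the restart. A hedged
	# way to confirm the effective values afterwards, from a shell inside the node:
	sudo crio config 2>/dev/null | grep -E 'pause_image|cgroup_manager|conmon_cgroup'
	sudo crictl info | grep -i cgroup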
	I1003 18:27:26.725699  287189 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1003 18:27:26.725788  287189 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1003 18:27:26.729960  287189 start.go:563] Will wait 60s for crictl version
	I1003 18:27:26.730024  287189 ssh_runner.go:195] Run: which crictl
	I1003 18:27:26.733487  287189 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1003 18:27:26.763051  287189 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1003 18:27:26.763156  287189 ssh_runner.go:195] Run: crio --version
	I1003 18:27:26.791411  287189 ssh_runner.go:195] Run: crio --version
	I1003 18:27:26.823819  287189 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1003 18:27:26.826690  287189 cli_runner.go:164] Run: docker network inspect addons-952140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 18:27:26.842456  287189 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1003 18:27:26.846343  287189 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 18:27:26.856409  287189 kubeadm.go:883] updating cluster {Name:addons-952140 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-952140 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1003 18:27:26.856530  287189 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 18:27:26.856597  287189 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 18:27:26.890649  287189 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 18:27:26.890674  287189 crio.go:433] Images already preloaded, skipping extraction
	I1003 18:27:26.890737  287189 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 18:27:26.918855  287189 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 18:27:26.918880  287189 cache_images.go:85] Images are preloaded, skipping loading
	I1003 18:27:26.918889  287189 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1003 18:27:26.918977  287189 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-952140 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-952140 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1003 18:27:26.919068  287189 ssh_runner.go:195] Run: crio config
	I1003 18:27:26.974117  287189 cni.go:84] Creating CNI manager for ""
	I1003 18:27:26.974143  287189 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 18:27:26.974164  287189 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1003 18:27:26.974187  287189 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-952140 NodeName:addons-952140 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 18:27:26.974330  287189 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-952140"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1003 18:27:26.974414  287189 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1003 18:27:26.984460  287189 binaries.go:44] Found k8s binaries, skipping transfer
	I1003 18:27:26.984544  287189 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1003 18:27:26.993387  287189 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1003 18:27:27.007458  287189 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 18:27:27.020629  287189 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1003 18:27:27.033663  287189 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1003 18:27:27.037228  287189 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 18:27:27.046757  287189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:27:27.154280  287189 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 18:27:27.169428  287189 certs.go:69] Setting up /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140 for IP: 192.168.49.2
	I1003 18:27:27.169466  287189 certs.go:195] generating shared ca certs ...
	I1003 18:27:27.169483  287189 certs.go:227] acquiring lock for ca certs: {Name:mk5a10e6c921326e9c211447576eaeb893259ba7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:27:27.169741  287189 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21625-284583/.minikube/ca.key
	I1003 18:27:28.051313  287189 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-284583/.minikube/ca.crt ...
	I1003 18:27:28.051369  287189 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/ca.crt: {Name:mk4762d571a7a8484888e142e032b018ed06ae45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:27:28.051576  287189 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-284583/.minikube/ca.key ...
	I1003 18:27:28.051590  287189 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/ca.key: {Name:mk3482c30285b4babfb26eaf5951feb9c1fe2920 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:27:28.051689  287189 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.key
	I1003 18:27:28.237762  287189 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.crt ...
	I1003 18:27:28.237792  287189 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.crt: {Name:mkafbd54c049b3bb6f950505f085641692ae365d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:27:28.237966  287189 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.key ...
	I1003 18:27:28.237979  287189 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.key: {Name:mkb1422c38587215187c66c3c57c750e98643381 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:27:28.238671  287189 certs.go:257] generating profile certs ...
	I1003 18:27:28.238742  287189 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/client.key
	I1003 18:27:28.238760  287189 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/client.crt with IP's: []
	I1003 18:27:28.489726  287189 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/client.crt ...
	I1003 18:27:28.489758  287189 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/client.crt: {Name:mk96a252ffc9b3e664309d46953d957d82a24126 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:27:28.489966  287189 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/client.key ...
	I1003 18:27:28.489984  287189 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/client.key: {Name:mk4a30a979ca12ac9d25eeaf2eb1b582a8e60aa8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:27:28.490077  287189 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/apiserver.key.f1fb8b4f
	I1003 18:27:28.490099  287189 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/apiserver.crt.f1fb8b4f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1003 18:27:28.765602  287189 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/apiserver.crt.f1fb8b4f ...
	I1003 18:27:28.765635  287189 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/apiserver.crt.f1fb8b4f: {Name:mk49522fcb61d177cb35d2e803b82ca25f278e14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:27:28.765813  287189 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/apiserver.key.f1fb8b4f ...
	I1003 18:27:28.765827  287189 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/apiserver.key.f1fb8b4f: {Name:mk77ffe454005fa8c41ea69a48307a698967e656 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:27:28.765927  287189 certs.go:382] copying /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/apiserver.crt.f1fb8b4f -> /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/apiserver.crt
	I1003 18:27:28.766021  287189 certs.go:386] copying /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/apiserver.key.f1fb8b4f -> /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/apiserver.key
	I1003 18:27:28.766080  287189 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/proxy-client.key
	I1003 18:27:28.766102  287189 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/proxy-client.crt with IP's: []
	I1003 18:27:29.595531  287189 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/proxy-client.crt ...
	I1003 18:27:29.595564  287189 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/proxy-client.crt: {Name:mk3be2dd7ccf9597721db3ea56ebb44245648c26 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:27:29.595750  287189 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/proxy-client.key ...
	I1003 18:27:29.595765  287189 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/proxy-client.key: {Name:mk5144ea8327b4cdfd47e82293649e6d7693a18c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:27:29.595951  287189 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 18:27:29.595991  287189 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem (1082 bytes)
	I1003 18:27:29.596021  287189 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem (1123 bytes)
	I1003 18:27:29.596048  287189 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/key.pem (1675 bytes)
	I1003 18:27:29.596599  287189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 18:27:29.615451  287189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1003 18:27:29.632668  287189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 18:27:29.650366  287189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1003 18:27:29.668879  287189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1003 18:27:29.685560  287189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1003 18:27:29.702921  287189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 18:27:29.720259  287189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1003 18:27:29.737586  287189 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 18:27:29.754971  287189 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1003 18:27:29.767293  287189 ssh_runner.go:195] Run: openssl version
	I1003 18:27:29.773509  287189 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 18:27:29.781558  287189 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:27:29.785074  287189 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  3 18:27 /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:27:29.785131  287189 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:27:29.825944  287189 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1003 18:27:29.834046  287189 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 18:27:29.837521  287189 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1003 18:27:29.837623  287189 kubeadm.go:400] StartCluster: {Name:addons-952140 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-952140 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:27:29.837708  287189 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1003 18:27:29.837767  287189 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1003 18:27:29.864321  287189 cri.go:89] found id: ""
	I1003 18:27:29.864474  287189 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 18:27:29.872083  287189 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1003 18:27:29.879489  287189 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1003 18:27:29.879553  287189 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 18:27:29.886839  287189 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1003 18:27:29.886901  287189 kubeadm.go:157] found existing configuration files:
	
	I1003 18:27:29.886976  287189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1003 18:27:29.894441  287189 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1003 18:27:29.894505  287189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1003 18:27:29.901848  287189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1003 18:27:29.909210  287189 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1003 18:27:29.909288  287189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1003 18:27:29.916969  287189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1003 18:27:29.924904  287189 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1003 18:27:29.924990  287189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1003 18:27:29.932084  287189 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1003 18:27:29.939726  287189 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1003 18:27:29.939850  287189 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1003 18:27:29.947362  287189 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1003 18:27:29.997944  287189 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1003 18:27:29.998011  287189 kubeadm.go:318] [preflight] Running pre-flight checks
	I1003 18:27:30.074042  287189 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1003 18:27:30.074152  287189 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1003 18:27:30.074211  287189 kubeadm.go:318] OS: Linux
	I1003 18:27:30.074283  287189 kubeadm.go:318] CGROUPS_CPU: enabled
	I1003 18:27:30.074360  287189 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1003 18:27:30.074435  287189 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1003 18:27:30.074504  287189 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1003 18:27:30.074575  287189 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1003 18:27:30.074644  287189 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1003 18:27:30.074708  287189 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1003 18:27:30.074778  287189 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1003 18:27:30.074849  287189 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1003 18:27:30.158604  287189 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1003 18:27:30.158768  287189 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1003 18:27:30.158892  287189 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1003 18:27:30.169253  287189 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1003 18:27:30.173502  287189 out.go:252]   - Generating certificates and keys ...
	I1003 18:27:30.173611  287189 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1003 18:27:30.173709  287189 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1003 18:27:30.469446  287189 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1003 18:27:32.754604  287189 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1003 18:27:33.102636  287189 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1003 18:27:33.370237  287189 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1003 18:27:33.596294  287189 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1003 18:27:33.596672  287189 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-952140 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1003 18:27:33.980605  287189 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1003 18:27:33.981071  287189 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-952140 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1003 18:27:34.460225  287189 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1003 18:27:34.845280  287189 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1003 18:27:34.949650  287189 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1003 18:27:34.949975  287189 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1003 18:27:35.388926  287189 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1003 18:27:35.852554  287189 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1003 18:27:37.179116  287189 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1003 18:27:37.576875  287189 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1003 18:27:39.110661  287189 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1003 18:27:39.111595  287189 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1003 18:27:39.114447  287189 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1003 18:27:39.117721  287189 out.go:252]   - Booting up control plane ...
	I1003 18:27:39.117830  287189 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1003 18:27:39.125059  287189 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1003 18:27:39.126398  287189 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1003 18:27:39.148112  287189 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1003 18:27:39.148239  287189 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1003 18:27:39.155771  287189 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1003 18:27:39.156346  287189 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1003 18:27:39.156400  287189 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1003 18:27:39.285437  287189 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1003 18:27:39.285566  287189 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1003 18:27:39.797134  287189 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 513.549621ms
	I1003 18:27:39.797514  287189 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1003 18:27:39.797818  287189 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1003 18:27:39.798116  287189 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1003 18:27:39.798402  287189 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1003 18:27:41.975694  287189 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.176821071s
	I1003 18:27:44.060375  287189 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.261562115s
	I1003 18:27:45.800073  287189 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.00179073s
	I1003 18:27:45.831091  287189 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1003 18:27:45.849703  287189 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1003 18:27:45.865907  287189 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1003 18:27:45.866361  287189 kubeadm.go:318] [mark-control-plane] Marking the node addons-952140 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1003 18:27:45.879548  287189 kubeadm.go:318] [bootstrap-token] Using token: fbxqq7.5pacsqus63pybu4q
	I1003 18:27:45.882701  287189 out.go:252]   - Configuring RBAC rules ...
	I1003 18:27:45.882823  287189 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1003 18:27:45.900037  287189 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1003 18:27:45.910033  287189 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1003 18:27:45.919386  287189 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1003 18:27:45.932675  287189 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1003 18:27:45.957218  287189 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1003 18:27:46.208200  287189 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1003 18:27:46.652025  287189 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1003 18:27:47.209933  287189 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1003 18:27:47.211586  287189 kubeadm.go:318] 
	I1003 18:27:47.211662  287189 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1003 18:27:47.211670  287189 kubeadm.go:318] 
	I1003 18:27:47.211750  287189 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1003 18:27:47.211755  287189 kubeadm.go:318] 
	I1003 18:27:47.211781  287189 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1003 18:27:47.211843  287189 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1003 18:27:47.211902  287189 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1003 18:27:47.211915  287189 kubeadm.go:318] 
	I1003 18:27:47.211972  287189 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1003 18:27:47.211977  287189 kubeadm.go:318] 
	I1003 18:27:47.212027  287189 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1003 18:27:47.212031  287189 kubeadm.go:318] 
	I1003 18:27:47.212086  287189 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1003 18:27:47.212164  287189 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1003 18:27:47.212236  287189 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1003 18:27:47.212240  287189 kubeadm.go:318] 
	I1003 18:27:47.212328  287189 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1003 18:27:47.212409  287189 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1003 18:27:47.212414  287189 kubeadm.go:318] 
	I1003 18:27:47.212501  287189 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token fbxqq7.5pacsqus63pybu4q \
	I1003 18:27:47.212608  287189 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:f66ff31263aa4cda6b17caa2076838d6a1918275f1c2773b90b119c0d4a4d71a \
	I1003 18:27:47.212630  287189 kubeadm.go:318] 	--control-plane 
	I1003 18:27:47.212634  287189 kubeadm.go:318] 
	I1003 18:27:47.212735  287189 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1003 18:27:47.212741  287189 kubeadm.go:318] 
	I1003 18:27:47.212826  287189 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token fbxqq7.5pacsqus63pybu4q \
	I1003 18:27:47.212938  287189 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:f66ff31263aa4cda6b17caa2076838d6a1918275f1c2773b90b119c0d4a4d71a 
	I1003 18:27:47.216798  287189 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1003 18:27:47.217138  287189 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1003 18:27:47.217272  287189 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1003 18:27:47.217293  287189 cni.go:84] Creating CNI manager for ""
	I1003 18:27:47.217302  287189 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 18:27:47.220479  287189 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1003 18:27:47.223491  287189 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1003 18:27:47.227899  287189 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1003 18:27:47.227923  287189 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1003 18:27:47.242859  287189 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1003 18:27:47.530445  287189 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1003 18:27:47.530545  287189 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 18:27:47.530592  287189 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-952140 minikube.k8s.io/updated_at=2025_10_03T18_27_47_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=a43873c79fc22f8b1ccd29d3dfa635d392b09335 minikube.k8s.io/name=addons-952140 minikube.k8s.io/primary=true
	I1003 18:27:47.742326  287189 ops.go:34] apiserver oom_adj: -16
	I1003 18:27:47.742444  287189 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 18:27:48.242619  287189 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 18:27:48.742594  287189 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 18:27:49.242563  287189 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 18:27:49.743505  287189 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 18:27:50.242824  287189 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 18:27:50.743079  287189 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 18:27:51.242841  287189 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 18:27:51.742776  287189 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 18:27:51.884205  287189 kubeadm.go:1113] duration metric: took 4.353727174s to wait for elevateKubeSystemPrivileges
	I1003 18:27:51.884232  287189 kubeadm.go:402] duration metric: took 22.046612743s to StartCluster
	I1003 18:27:51.884248  287189 settings.go:142] acquiring lock: {Name:mkc95577dbc448e3409dfa2b5e53a3a1327cb451 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:27:51.884358  287189 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21625-284583/kubeconfig
	I1003 18:27:51.884806  287189 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/kubeconfig: {Name:mkc1323fd87f4a78231a26d2dab0dff7feecf1e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:27:51.885658  287189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1003 18:27:51.885955  287189 config.go:182] Loaded profile config "addons-952140": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:27:51.885770  287189 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1003 18:27:51.886049  287189 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1003 18:27:51.886131  287189 addons.go:69] Setting yakd=true in profile "addons-952140"
	I1003 18:27:51.886148  287189 addons.go:238] Setting addon yakd=true in "addons-952140"
	I1003 18:27:51.886169  287189 host.go:66] Checking if "addons-952140" exists ...
	I1003 18:27:51.886625  287189 cli_runner.go:164] Run: docker container inspect addons-952140 --format={{.State.Status}}
	I1003 18:27:51.887143  287189 addons.go:69] Setting metrics-server=true in profile "addons-952140"
	I1003 18:27:51.887178  287189 addons.go:238] Setting addon metrics-server=true in "addons-952140"
	I1003 18:27:51.887210  287189 host.go:66] Checking if "addons-952140" exists ...
	I1003 18:27:51.887613  287189 cli_runner.go:164] Run: docker container inspect addons-952140 --format={{.State.Status}}
	I1003 18:27:51.888255  287189 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-952140"
	I1003 18:27:51.891424  287189 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-952140"
	I1003 18:27:51.891508  287189 host.go:66] Checking if "addons-952140" exists ...
	I1003 18:27:51.891993  287189 cli_runner.go:164] Run: docker container inspect addons-952140 --format={{.State.Status}}
	I1003 18:27:51.894863  287189 out.go:179] * Verifying Kubernetes components...
	I1003 18:27:51.890113  287189 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-952140"
	I1003 18:27:51.898013  287189 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-952140"
	I1003 18:27:51.902820  287189 host.go:66] Checking if "addons-952140" exists ...
	I1003 18:27:51.903311  287189 cli_runner.go:164] Run: docker container inspect addons-952140 --format={{.State.Status}}
	I1003 18:27:51.890122  287189 addons.go:69] Setting registry=true in profile "addons-952140"
	I1003 18:27:51.903723  287189 addons.go:238] Setting addon registry=true in "addons-952140"
	I1003 18:27:51.903751  287189 host.go:66] Checking if "addons-952140" exists ...
	I1003 18:27:51.904153  287189 cli_runner.go:164] Run: docker container inspect addons-952140 --format={{.State.Status}}
	I1003 18:27:51.908395  287189 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:27:51.890136  287189 addons.go:69] Setting registry-creds=true in profile "addons-952140"
	I1003 18:27:51.908549  287189 addons.go:238] Setting addon registry-creds=true in "addons-952140"
	I1003 18:27:51.908592  287189 host.go:66] Checking if "addons-952140" exists ...
	I1003 18:27:51.909212  287189 cli_runner.go:164] Run: docker container inspect addons-952140 --format={{.State.Status}}
	I1003 18:27:51.890143  287189 addons.go:69] Setting storage-provisioner=true in profile "addons-952140"
	I1003 18:27:51.927188  287189 addons.go:238] Setting addon storage-provisioner=true in "addons-952140"
	I1003 18:27:51.927240  287189 host.go:66] Checking if "addons-952140" exists ...
	I1003 18:27:51.927699  287189 cli_runner.go:164] Run: docker container inspect addons-952140 --format={{.State.Status}}
	I1003 18:27:51.890149  287189 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-952140"
	I1003 18:27:51.935395  287189 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-952140"
	I1003 18:27:51.935763  287189 cli_runner.go:164] Run: docker container inspect addons-952140 --format={{.State.Status}}
	I1003 18:27:51.890155  287189 addons.go:69] Setting volcano=true in profile "addons-952140"
	I1003 18:27:51.969696  287189 addons.go:238] Setting addon volcano=true in "addons-952140"
	I1003 18:27:51.969938  287189 host.go:66] Checking if "addons-952140" exists ...
	I1003 18:27:51.972839  287189 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1003 18:27:51.972995  287189 cli_runner.go:164] Run: docker container inspect addons-952140 --format={{.State.Status}}
	I1003 18:27:51.890259  287189 addons.go:69] Setting volumesnapshots=true in profile "addons-952140"
	I1003 18:27:51.890308  287189 addons.go:69] Setting ingress=true in profile "addons-952140"
	I1003 18:27:51.890312  287189 addons.go:69] Setting cloud-spanner=true in profile "addons-952140"
	I1003 18:27:51.890316  287189 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-952140"
	I1003 18:27:51.890418  287189 addons.go:69] Setting default-storageclass=true in profile "addons-952140"
	I1003 18:27:51.890426  287189 addons.go:69] Setting gcp-auth=true in profile "addons-952140"
	I1003 18:27:51.890433  287189 addons.go:69] Setting inspektor-gadget=true in profile "addons-952140"
	I1003 18:27:51.890439  287189 addons.go:69] Setting ingress-dns=true in profile "addons-952140"
	I1003 18:27:51.976824  287189 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1003 18:27:51.987378  287189 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1003 18:27:51.994747  287189 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1003 18:27:51.994856  287189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952140
	I1003 18:27:51.996592  287189 addons.go:238] Setting addon volumesnapshots=true in "addons-952140"
	I1003 18:27:51.996700  287189 host.go:66] Checking if "addons-952140" exists ...
	I1003 18:27:52.003447  287189 cli_runner.go:164] Run: docker container inspect addons-952140 --format={{.State.Status}}
	I1003 18:27:52.018177  287189 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1003 18:27:52.018262  287189 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1003 18:27:52.018343  287189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952140
	I1003 18:27:52.032572  287189 addons.go:238] Setting addon ingress=true in "addons-952140"
	I1003 18:27:52.032678  287189 host.go:66] Checking if "addons-952140" exists ...
	I1003 18:27:52.033217  287189 cli_runner.go:164] Run: docker container inspect addons-952140 --format={{.State.Status}}
	I1003 18:27:52.052853  287189 addons.go:238] Setting addon cloud-spanner=true in "addons-952140"
	I1003 18:27:52.052963  287189 host.go:66] Checking if "addons-952140" exists ...
	I1003 18:27:52.053519  287189 cli_runner.go:164] Run: docker container inspect addons-952140 --format={{.State.Status}}
	I1003 18:27:52.067832  287189 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-952140"
	I1003 18:27:52.067884  287189 host.go:66] Checking if "addons-952140" exists ...
	I1003 18:27:52.068341  287189 cli_runner.go:164] Run: docker container inspect addons-952140 --format={{.State.Status}}
	I1003 18:27:52.080977  287189 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-952140"
	I1003 18:27:52.081337  287189 cli_runner.go:164] Run: docker container inspect addons-952140 --format={{.State.Status}}
	I1003 18:27:52.100077  287189 mustload.go:65] Loading cluster: addons-952140
	I1003 18:27:52.100303  287189 config.go:182] Loaded profile config "addons-952140": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:27:52.100560  287189 cli_runner.go:164] Run: docker container inspect addons-952140 --format={{.State.Status}}
	I1003 18:27:52.103824  287189 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1003 18:27:52.114286  287189 addons.go:238] Setting addon inspektor-gadget=true in "addons-952140"
	I1003 18:27:52.114342  287189 host.go:66] Checking if "addons-952140" exists ...
	I1003 18:27:52.114830  287189 cli_runner.go:164] Run: docker container inspect addons-952140 --format={{.State.Status}}
	I1003 18:27:52.127892  287189 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1003 18:27:52.132090  287189 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1003 18:27:52.132129  287189 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1003 18:27:52.132192  287189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952140
	I1003 18:27:52.132895  287189 addons.go:238] Setting addon ingress-dns=true in "addons-952140"
	I1003 18:27:52.132954  287189 host.go:66] Checking if "addons-952140" exists ...
	I1003 18:27:52.133404  287189 cli_runner.go:164] Run: docker container inspect addons-952140 --format={{.State.Status}}
	I1003 18:27:52.154631  287189 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 18:27:52.155777  287189 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1003 18:27:52.163499  287189 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1003 18:27:52.163524  287189 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1003 18:27:52.163594  287189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952140
	I1003 18:27:52.164296  287189 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1003 18:27:52.164346  287189 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1003 18:27:52.164442  287189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952140
	I1003 18:27:52.207505  287189 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:27:52.207525  287189 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1003 18:27:52.207601  287189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952140
	I1003 18:27:52.242450  287189 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-952140"
	I1003 18:27:52.242489  287189 host.go:66] Checking if "addons-952140" exists ...
	I1003 18:27:52.242928  287189 cli_runner.go:164] Run: docker container inspect addons-952140 --format={{.State.Status}}
	I1003 18:27:52.292370  287189 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1003 18:27:52.306928  287189 out.go:179]   - Using image docker.io/registry:3.0.0
	I1003 18:27:52.312840  287189 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1003 18:27:52.312866  287189 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1003 18:27:52.312939  287189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952140
	W1003 18:27:52.323928  287189 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1003 18:27:52.324206  287189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/addons-952140/id_rsa Username:docker}
	I1003 18:27:52.325486  287189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/addons-952140/id_rsa Username:docker}
	I1003 18:27:52.348965  287189 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1003 18:27:52.350861  287189 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1003 18:27:52.350873  287189 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I1003 18:27:52.351954  287189 host.go:66] Checking if "addons-952140" exists ...
	I1003 18:27:52.380952  287189 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1003 18:27:52.380979  287189 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1003 18:27:52.381048  287189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952140
	I1003 18:27:52.381234  287189 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1003 18:27:52.381243  287189 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1003 18:27:52.381278  287189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952140
	I1003 18:27:52.396557  287189 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.1
	I1003 18:27:52.402774  287189 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1003 18:27:52.402799  287189 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1003 18:27:52.402875  287189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952140
	I1003 18:27:52.415824  287189 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1003 18:27:52.420777  287189 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1003 18:27:52.422999  287189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/addons-952140/id_rsa Username:docker}
	I1003 18:27:52.424533  287189 addons.go:238] Setting addon default-storageclass=true in "addons-952140"
	I1003 18:27:52.424573  287189 host.go:66] Checking if "addons-952140" exists ...
	I1003 18:27:52.425055  287189 cli_runner.go:164] Run: docker container inspect addons-952140 --format={{.State.Status}}
	I1003 18:27:52.433153  287189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/addons-952140/id_rsa Username:docker}
	I1003 18:27:52.433879  287189 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1003 18:27:52.434253  287189 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1003 18:27:52.434268  287189 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1003 18:27:52.434322  287189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952140
	I1003 18:27:52.442013  287189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/addons-952140/id_rsa Username:docker}
	I1003 18:27:52.443911  287189 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1003 18:27:52.444086  287189 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I1003 18:27:52.447971  287189 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1003 18:27:52.451623  287189 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1003 18:27:52.454684  287189 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1003 18:27:52.456916  287189 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1003 18:27:52.460168  287189 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1003 18:27:52.460174  287189 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1003 18:27:52.465012  287189 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1003 18:27:52.465038  287189 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1003 18:27:52.465112  287189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952140
	I1003 18:27:52.465434  287189 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1003 18:27:52.465471  287189 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1003 18:27:52.465545  287189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952140
	I1003 18:27:52.517675  287189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/addons-952140/id_rsa Username:docker}
	I1003 18:27:52.519654  287189 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1003 18:27:52.523358  287189 out.go:179]   - Using image docker.io/busybox:stable
	I1003 18:27:52.526904  287189 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1003 18:27:52.526927  287189 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1003 18:27:52.526999  287189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952140
	I1003 18:27:52.572059  287189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/addons-952140/id_rsa Username:docker}
	I1003 18:27:52.643523  287189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/addons-952140/id_rsa Username:docker}
	I1003 18:27:52.643955  287189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/addons-952140/id_rsa Username:docker}
	I1003 18:27:52.652045  287189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/addons-952140/id_rsa Username:docker}
	I1003 18:27:52.658421  287189 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1003 18:27:52.658441  287189 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1003 18:27:52.658502  287189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952140
	I1003 18:27:52.666270  287189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/addons-952140/id_rsa Username:docker}
	I1003 18:27:52.668267  287189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/addons-952140/id_rsa Username:docker}
	I1003 18:27:52.670682  287189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/addons-952140/id_rsa Username:docker}
	I1003 18:27:52.686351  287189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/addons-952140/id_rsa Username:docker}
	I1003 18:27:52.697005  287189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/addons-952140/id_rsa Username:docker}
	W1003 18:27:52.698237  287189 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1003 18:27:52.698274  287189 retry.go:31] will retry after 270.649643ms: ssh: handshake failed: EOF
	I1003 18:27:52.810020  287189 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1003 18:27:52.810207  287189 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 18:27:53.027037  287189 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1003 18:27:53.027113  287189 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1003 18:27:53.075653  287189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1003 18:27:53.097995  287189 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1003 18:27:53.098068  287189 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1003 18:27:53.116878  287189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1003 18:27:53.144066  287189 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1003 18:27:53.144092  287189 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1003 18:27:53.153703  287189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:27:53.185518  287189 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1003 18:27:53.185591  287189 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1003 18:27:53.246312  287189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1003 18:27:53.253600  287189 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1003 18:27:53.253672  287189 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1003 18:27:53.302188  287189 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1003 18:27:53.302267  287189 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1003 18:27:53.313101  287189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1003 18:27:53.316534  287189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1003 18:27:53.330656  287189 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1003 18:27:53.330736  287189 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1003 18:27:53.342190  287189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1003 18:27:53.354262  287189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1003 18:27:53.360647  287189 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1003 18:27:53.360719  287189 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1003 18:27:53.400438  287189 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1003 18:27:53.400524  287189 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1003 18:27:53.445494  287189 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1003 18:27:53.445565  287189 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1003 18:27:53.477988  287189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1003 18:27:53.486360  287189 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1003 18:27:53.486437  287189 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1003 18:27:53.505686  287189 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1003 18:27:53.505763  287189 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1003 18:27:53.542487  287189 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1003 18:27:53.542563  287189 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1003 18:27:53.581967  287189 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1003 18:27:53.582048  287189 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1003 18:27:53.596563  287189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1003 18:27:53.676429  287189 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1003 18:27:53.676519  287189 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1003 18:27:53.681971  287189 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1003 18:27:53.682050  287189 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1003 18:27:53.716867  287189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:27:53.720430  287189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1003 18:27:53.782430  287189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1003 18:27:53.829023  287189 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1003 18:27:53.829048  287189 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1003 18:27:53.848637  287189 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1003 18:27:53.848659  287189 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1003 18:27:54.073054  287189 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1003 18:27:54.073124  287189 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1003 18:27:54.079394  287189 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1003 18:27:54.079474  287189 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1003 18:27:54.241287  287189 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1003 18:27:54.241358  287189 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1003 18:27:54.389742  287189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1003 18:27:54.439690  287189 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1003 18:27:54.439770  287189 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1003 18:27:54.507319  287189 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.69706511s)
	I1003 18:27:54.507569  287189 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.697471311s)
	I1003 18:27:54.507607  287189 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
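For reference, the sed pipeline logged at 18:27:52.810020 edits the coredns ConfigMap in place: it inserts a hosts block immediately before the forward plugin so that host.minikube.internal resolves to 192.168.49.1, and adds the log plugin just before the errors line. A minimal sketch of the patched ConfigMap follows; the plugins between errors and hosts are elided, so treat it as an illustration of the edit rather than the full Corefile:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: coredns
      namespace: kube-system
    data:
      Corefile: |
        .:53 {
            log                          # inserted just before the existing "errors" line
            errors
            # ... default plugins unchanged ...
            hosts {                      # inserted just before the "forward" plugin
               192.168.49.1 host.minikube.internal
               fallthrough
            }
            forward . /etc/resolv.conf
            # ...
        }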
	I1003 18:27:54.508964  287189 node_ready.go:35] waiting up to 6m0s for node "addons-952140" to be "Ready" ...
	I1003 18:27:54.544022  287189 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1003 18:27:54.544098  287189 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1003 18:27:54.690492  287189 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.573578587s)
	I1003 18:27:54.690651  287189 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.614899295s)
	I1003 18:27:54.782010  287189 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1003 18:27:54.782082  287189 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1003 18:27:54.919707  287189 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1003 18:27:54.919793  287189 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1003 18:27:55.014540  287189 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-952140" context rescaled to 1 replicas
	I1003 18:27:55.165958  287189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1003 18:27:55.173633  287189 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.019846518s)
	I1003 18:27:56.258004  287189 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.944798296s)
	I1003 18:27:56.258108  287189 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.011726106s)
	W1003 18:27:56.540628  287189 node_ready.go:57] node "addons-952140" has "Ready":"False" status (will retry)
	I1003 18:27:57.014215  287189 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.697599318s)
	I1003 18:27:57.014553  287189 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.672255393s)
	I1003 18:27:58.008743  287189 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.65437879s)
	I1003 18:27:58.008774  287189 addons.go:479] Verifying addon ingress=true in "addons-952140"
	I1003 18:27:58.008992  287189 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.530928226s)
	I1003 18:27:58.009016  287189 addons.go:479] Verifying addon registry=true in "addons-952140"
	I1003 18:27:58.009463  287189 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.412822251s)
	I1003 18:27:58.009520  287189 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.289012177s)
	W1003 18:27:58.009536  287189 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:27:58.009606  287189 retry.go:31] will retry after 300.387324ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
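Every inspektor-gadget retry that follows fails with this same validation error: kubectl reports that a document in ig-crd.yaml carries no apiVersion and no kind, so it cannot be mapped to a CustomResourceDefinition and the apply exits non-zero even though the other gadget objects are created. For comparison, a CRD document needs at least the header sketched below; the group and names here are placeholders, not the real gadget CRD:

    apiVersion: apiextensions.k8s.io/v1    # missing in the failing ig-crd.yaml document
    kind: CustomResourceDefinition         # missing in the failing ig-crd.yaml document
    metadata:
      name: examples.example.io            # placeholder; must be <plural>.<group>
    spec:
      group: example.io
      names:
        plural: examples
        singular: example
        kind: Example
      scope: Namespaced
      versions:
      - name: v1
        served: true
        storage: true
        schema:
          openAPIV3Schema:
            type: object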
	I1003 18:27:58.009627  287189 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.227173637s)
	I1003 18:27:58.011292  287189 addons.go:479] Verifying addon metrics-server=true in "addons-952140"
	I1003 18:27:58.009700  287189 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.619882084s)
	W1003 18:27:58.011342  287189 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1003 18:27:58.011358  287189 retry.go:31] will retry after 355.343341ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
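Unlike the ig-crd.yaml failure, this one is an ordering problem rather than a malformed manifest: the VolumeSnapshotClass in csi-hostpath-snapshotclass.yaml is submitted in the same apply as the snapshot.storage.k8s.io CRDs that define it, so the API server has no mapping for the kind yet. The forced re-apply issued at 18:27:58.367593 completes at 18:28:01.143602 with no further retry logged for this file set. The object the error names looks roughly like the sketch below; the driver and deletionPolicy values are assumptions typical of the csi-hostpath addon, not copied from the file:

    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshotClass
    metadata:
      name: csi-hostpath-snapclass     # the name cited in the error above
    driver: hostpath.csi.k8s.io        # assumed driver name
    deletionPolicy: Delete             # assumed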
	I1003 18:27:58.009483  287189 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.292531229s)
	I1003 18:27:58.012129  287189 out.go:179] * Verifying ingress addon...
	I1003 18:27:58.012267  287189 out.go:179] * Verifying registry addon...
	I1003 18:27:58.014455  287189 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-952140 service yakd-dashboard -n yakd-dashboard
	
	I1003 18:27:58.017905  287189 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1003 18:27:58.018864  287189 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1003 18:27:58.027930  287189 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1003 18:27:58.027952  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:27:58.028082  287189 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1003 18:27:58.028088  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:27:58.275135  287189 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.109081492s)
	I1003 18:27:58.275177  287189 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-952140"
	I1003 18:27:58.280306  287189 out.go:179] * Verifying csi-hostpath-driver addon...
	I1003 18:27:58.283872  287189 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1003 18:27:58.288693  287189 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1003 18:27:58.288717  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:27:58.311051  287189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1003 18:27:58.367593  287189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1003 18:27:58.522388  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:27:58.522818  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:27:58.794795  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1003 18:27:59.012799  287189 node_ready.go:57] node "addons-952140" has "Ready":"False" status (will retry)
	I1003 18:27:59.022519  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:27:59.022728  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1003 18:27:59.261341  287189 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:27:59.261370  287189 retry.go:31] will retry after 265.416503ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:27:59.288769  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:27:59.521994  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:27:59.522302  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:27:59.527367  287189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1003 18:27:59.787667  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:27:59.991207  287189 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1003 18:27:59.991329  287189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952140
	I1003 18:28:00.081208  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:00.081618  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:00.082836  287189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/addons-952140/id_rsa Username:docker}
	I1003 18:28:00.307615  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:00.362565  287189 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1003 18:28:00.412284  287189 addons.go:238] Setting addon gcp-auth=true in "addons-952140"
	I1003 18:28:00.412425  287189 host.go:66] Checking if "addons-952140" exists ...
	I1003 18:28:00.413052  287189 cli_runner.go:164] Run: docker container inspect addons-952140 --format={{.State.Status}}
	I1003 18:28:00.479093  287189 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1003 18:28:00.479172  287189 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952140
	I1003 18:28:00.517149  287189 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/addons-952140/id_rsa Username:docker}
	I1003 18:28:00.524391  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:00.525996  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:00.787202  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:01.022920  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:01.023632  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:01.143602  287189 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.775960858s)
	I1003 18:28:01.143693  287189 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.616295151s)
	W1003 18:28:01.143722  287189 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:28:01.143740  287189 retry.go:31] will retry after 481.74906ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:28:01.147010  287189 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I1003 18:28:01.149975  287189 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1003 18:28:01.152817  287189 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1003 18:28:01.152846  287189 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1003 18:28:01.167943  287189 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1003 18:28:01.168011  287189 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1003 18:28:01.182137  287189 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1003 18:28:01.182163  287189 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1003 18:28:01.205064  287189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
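The gcp-auth addon install is three manifests (namespace, service, webhook) applied in the single kubectl call above. For orientation, the namespace object is as small as it gets; this is a sketch, not a copy of gcp-auth-ns.yaml:

    apiVersion: v1
    kind: Namespace
    metadata:
      name: gcp-auth    # the "gcp-auth" namespace the verifier polls further down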
	I1003 18:28:01.288069  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1003 18:28:01.512338  287189 node_ready.go:57] node "addons-952140" has "Ready":"False" status (will retry)
	I1003 18:28:01.522804  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:01.524238  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:01.626453  287189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1003 18:28:01.736565  287189 addons.go:479] Verifying addon gcp-auth=true in "addons-952140"
	I1003 18:28:01.739884  287189 out.go:179] * Verifying gcp-auth addon...
	I1003 18:28:01.743860  287189 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1003 18:28:01.783528  287189 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1003 18:28:01.783554  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:01.792100  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:02.023535  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:02.023928  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:02.247467  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:02.287564  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:02.522950  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:02.523726  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1003 18:28:02.563938  287189 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:28:02.563986  287189 retry.go:31] will retry after 1.197531103s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:28:02.746958  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:02.786954  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:03.022082  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:03.022166  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:03.247305  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:03.287190  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:03.521321  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:03.521659  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:03.747037  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:03.762140  287189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1003 18:28:03.787851  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1003 18:28:04.013196  287189 node_ready.go:57] node "addons-952140" has "Ready":"False" status (will retry)
	I1003 18:28:04.022596  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:04.023648  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:04.249580  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:04.287741  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:04.522598  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:04.523334  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1003 18:28:04.572879  287189 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:28:04.572961  287189 retry.go:31] will retry after 1.579380909s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:28:04.746919  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:04.786871  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:05.021912  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:05.023447  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:05.247611  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:05.287483  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:05.521370  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:05.522304  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:05.747734  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:05.787451  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:06.022193  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:06.023199  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:06.152500  287189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1003 18:28:06.247685  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:06.287848  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1003 18:28:06.511877  287189 node_ready.go:57] node "addons-952140" has "Ready":"False" status (will retry)
	I1003 18:28:06.523485  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:06.524290  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:06.747336  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:06.787889  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1003 18:28:06.982775  287189 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:28:06.982870  287189 retry.go:31] will retry after 1.448783473s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:28:07.021477  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:07.021756  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:07.248226  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:07.287040  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:07.521169  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:07.522379  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:07.747842  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:07.787511  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:08.022213  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:08.022503  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:08.247598  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:08.286629  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:08.432873  287189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1003 18:28:08.512202  287189 node_ready.go:57] node "addons-952140" has "Ready":"False" status (will retry)
	I1003 18:28:08.521945  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:08.522780  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:08.746769  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:08.788572  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:09.023935  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:09.024029  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:09.247644  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1003 18:28:09.256162  287189 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:28:09.256196  287189 retry.go:31] will retry after 1.878991162s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:28:09.287260  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:09.521095  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:09.522355  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:09.748006  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:09.786558  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:10.022182  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:10.022409  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:10.247601  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:10.287529  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1003 18:28:10.512560  287189 node_ready.go:57] node "addons-952140" has "Ready":"False" status (will retry)
	I1003 18:28:10.521839  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:10.522330  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:10.747093  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:10.786717  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:11.022113  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:11.022177  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:11.135449  287189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1003 18:28:11.246908  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:11.287886  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:11.522493  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:11.523031  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:11.747872  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:11.787524  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1003 18:28:11.961235  287189 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:28:11.961269  287189 retry.go:31] will retry after 2.162467062s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
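
Each retry fails the same way: kubectl applies the other gadget manifests but rejects /etc/kubernetes/addons/ig-crd.yaml because client-side validation requires apiVersion and kind on every YAML document. A minimal sketch of that check over a multi-document manifest, using gopkg.in/yaml.v3; the local file name and the standalone program are assumptions for illustration, not part of the minikube code:

package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

// Reports every YAML document in the file that lacks apiVersion or kind,
// which is the condition kubectl's validator is flagging above.
func main() {
	f, err := os.Open("ig-crd.yaml") // assumed local copy of the manifest
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for i := 1; ; i++ {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break
			}
			panic(err)
		}
		if doc == nil { // empty document between "---" separators
			continue
		}
		if doc["apiVersion"] == nil || doc["kind"] == nil {
			fmt.Printf("document %d: apiVersion/kind not set\n", i)
		}
	}
}
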
	I1003 18:28:12.021606  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:12.021746  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:12.248279  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:12.286956  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:12.522126  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:12.521778  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:12.747785  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:12.787401  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1003 18:28:13.012118  287189 node_ready.go:57] node "addons-952140" has "Ready":"False" status (will retry)
	I1003 18:28:13.022176  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:13.022494  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:13.247698  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:13.286625  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:13.522247  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:13.522393  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:13.747718  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:13.787664  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:14.022341  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:14.022427  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:14.124780  287189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1003 18:28:14.247152  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:14.287673  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:14.522946  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:14.523503  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:14.747251  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:14.787613  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1003 18:28:14.947360  287189 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:28:14.947434  287189 retry.go:31] will retry after 3.350178966s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:28:15.013492  287189 node_ready.go:57] node "addons-952140" has "Ready":"False" status (will retry)
	I1003 18:28:15.022130  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:15.023627  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:15.246449  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:15.287396  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:15.521130  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:15.522855  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:15.747470  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:15.787235  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:16.022338  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:16.022722  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:16.247702  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:16.288097  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:16.521936  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:16.522000  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:16.746878  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:16.786693  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:17.022216  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:17.022370  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:17.247282  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:17.287172  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1003 18:28:17.512420  287189 node_ready.go:57] node "addons-952140" has "Ready":"False" status (will retry)
	I1003 18:28:17.522147  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:17.523254  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:17.748201  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:17.786715  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:18.022026  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:18.022243  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:18.247170  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:18.287108  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:18.298223  287189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1003 18:28:18.522459  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:18.523036  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:18.747133  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:18.787554  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:19.021580  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:19.024153  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1003 18:28:19.082657  287189 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:28:19.082690  287189 retry.go:31] will retry after 8.00452608s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
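
The delays between attempts grow from roughly 2s to 3.4s to 8s and then 20s, so retry.go is wrapping the failing apply in a jittered, roughly exponential backoff. A minimal sketch of that shape, with the attempt count, cap, and jitter chosen as assumptions rather than minikube's actual constants:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff keeps calling fn until it succeeds or attempts run out,
// roughly doubling the wait each time and adding jitter, matching the
// growing "will retry after ..." intervals in the log above.
func retryWithBackoff(attempts int, initial, max time.Duration, fn func() error) error {
	wait := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(wait) / 2))
		fmt.Printf("will retry after %v: %v\n", wait+jitter, err)
		time.Sleep(wait + jitter)
		if wait *= 2; wait > max {
			wait = max
		}
	}
	return err
}

func main() {
	_ = retryWithBackoff(5, 2*time.Second, 30*time.Second, func() error {
		return fmt.Errorf("apply failed") // stand-in for the kubectl apply above
	})
}
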
	I1003 18:28:19.247467  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:19.287265  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:19.521832  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:19.521994  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:19.747298  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:19.787000  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1003 18:28:20.012225  287189 node_ready.go:57] node "addons-952140" has "Ready":"False" status (will retry)
	I1003 18:28:20.021587  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:20.023010  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:20.247027  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:20.286731  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:20.522010  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:20.522059  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:20.746778  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:20.787870  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:21.021560  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:21.022468  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:21.247878  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:21.287797  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:21.521383  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:21.522209  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:21.748428  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:21.787587  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1003 18:28:22.012791  287189 node_ready.go:57] node "addons-952140" has "Ready":"False" status (will retry)
	I1003 18:28:22.021871  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:22.022012  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:22.246970  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:22.287845  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:22.521771  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:22.522135  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:22.746808  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:22.786793  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:23.021210  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:23.022927  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:23.246636  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:23.287512  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:23.521736  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:23.521933  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:23.747604  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:23.787498  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:24.021216  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:24.022623  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:24.246624  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:24.287639  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1003 18:28:24.512705  287189 node_ready.go:57] node "addons-952140" has "Ready":"False" status (will retry)
	I1003 18:28:24.521976  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:24.522132  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:24.747212  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:24.787009  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:25.021397  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:25.021642  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:25.246783  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:25.287413  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:25.521179  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:25.522394  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:25.747653  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:25.787543  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:26.022154  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:26.022221  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:26.247260  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:26.286891  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1003 18:28:26.512774  287189 node_ready.go:57] node "addons-952140" has "Ready":"False" status (will retry)
	I1003 18:28:26.521875  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:26.521993  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:26.747055  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:26.786713  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:27.021725  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:27.021787  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:27.087733  287189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1003 18:28:27.246974  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:27.287233  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:27.525261  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:27.525460  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:27.747525  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:27.787693  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1003 18:28:27.906544  287189 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:28:27.906600  287189 retry.go:31] will retry after 20.407055858s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:28:28.022271  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:28.022356  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:28.247199  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:28.287552  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:28.522050  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:28.522140  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:28.746832  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:28.787925  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1003 18:28:29.012969  287189 node_ready.go:57] node "addons-952140" has "Ready":"False" status (will retry)
	I1003 18:28:29.020558  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:29.021863  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:29.246833  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:29.287871  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:29.521888  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:29.522204  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:29.746900  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:29.787618  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:30.031510  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:30.031303  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:30.247699  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:30.287420  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:30.521988  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:30.522000  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:30.747051  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:30.787512  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:31.021884  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:31.022486  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:31.247471  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:31.287075  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1003 18:28:31.511903  287189 node_ready.go:57] node "addons-952140" has "Ready":"False" status (will retry)
	I1003 18:28:31.522171  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:31.522282  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:31.747308  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:31.787167  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:32.021610  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:32.022085  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:32.247016  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:32.286673  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:32.521875  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:32.522025  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:32.747191  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:32.787988  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:33.028936  287189 node_ready.go:49] node "addons-952140" is "Ready"
	I1003 18:28:33.028976  287189 node_ready.go:38] duration metric: took 38.519936489s for node "addons-952140" to be "Ready" ...
	I1003 18:28:33.028991  287189 api_server.go:52] waiting for apiserver process to appear ...
	I1003 18:28:33.029089  287189 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:28:33.031392  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:33.031835  287189 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1003 18:28:33.031854  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:33.044897  287189 api_server.go:72] duration metric: took 41.158104919s to wait for apiserver process to appear ...
	I1003 18:28:33.044941  287189 api_server.go:88] waiting for apiserver healthz status ...
	I1003 18:28:33.044979  287189 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1003 18:28:33.059321  287189 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1003 18:28:33.067290  287189 api_server.go:141] control plane version: v1.34.1
	I1003 18:28:33.067327  287189 api_server.go:131] duration metric: took 22.377923ms to wait for apiserver health ...
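
Once the node reports Ready, the apiserver checks above reduce to requesting /healthz on the control-plane endpoint and expecting an HTTP 200 with body "ok". A minimal sketch of that probe; skipping TLS verification is an assumption for brevity, since the real check would trust the cluster CA from the kubeconfig:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption for brevity; a real client would load the cluster CA instead.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s returned %d: %s\n", resp.Request.URL, resp.StatusCode, body)
}
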
	I1003 18:28:33.067336  287189 system_pods.go:43] waiting for kube-system pods to appear ...
	I1003 18:28:33.147794  287189 system_pods.go:59] 19 kube-system pods found
	I1003 18:28:33.147843  287189 system_pods.go:61] "coredns-66bc5c9577-2hhqm" [daea3b45-b31f-453a-80f5-c30f7fce4122] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1003 18:28:33.147850  287189 system_pods.go:61] "csi-hostpath-attacher-0" [376ecb21-1ca4-4f77-bac5-a4b5af7ccfdd] Pending
	I1003 18:28:33.147879  287189 system_pods.go:61] "csi-hostpath-resizer-0" [8450e23b-d7f0-4b50-a20c-a7fc38411191] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1003 18:28:33.147905  287189 system_pods.go:61] "csi-hostpathplugin-vsbgb" [e6597406-4522-46da-ad41-da01126918f9] Pending
	I1003 18:28:33.147918  287189 system_pods.go:61] "etcd-addons-952140" [6c2991f4-ee56-4fbb-8f55-bf86ce3c8bc3] Running
	I1003 18:28:33.147924  287189 system_pods.go:61] "kindnet-vx5lb" [39f3102d-aa3a-4a72-b884-4fcf57878faf] Running
	I1003 18:28:33.147937  287189 system_pods.go:61] "kube-apiserver-addons-952140" [70a4748f-eedd-41aa-8ade-b8d13f6c85fe] Running
	I1003 18:28:33.147943  287189 system_pods.go:61] "kube-controller-manager-addons-952140" [7710c628-50b2-44d1-9faa-7ba463e404c9] Running
	I1003 18:28:33.147948  287189 system_pods.go:61] "kube-ingress-dns-minikube" [fbc268d3-be63-48bd-a93c-f3466f7458ed] Pending
	I1003 18:28:33.147952  287189 system_pods.go:61] "kube-proxy-5hd7r" [674b4e86-cafa-4e3f-8b57-719de4a646f5] Running
	I1003 18:28:33.147962  287189 system_pods.go:61] "kube-scheduler-addons-952140" [ef6d468e-24f4-474f-adeb-1d9e9cf74c87] Running
	I1003 18:28:33.147988  287189 system_pods.go:61] "metrics-server-85b7d694d7-tscmk" [51883ecf-f53c-4001-af25-5785ed3fa7db] Pending
	I1003 18:28:33.147994  287189 system_pods.go:61] "nvidia-device-plugin-daemonset-84v2d" [c0869084-f969-40cf-8475-57eedeb02a93] Pending
	I1003 18:28:33.148009  287189 system_pods.go:61] "registry-66898fdd98-88sgc" [749ffc38-9d67-4777-b96d-422ce39f2b46] Pending
	I1003 18:28:33.148021  287189 system_pods.go:61] "registry-creds-764b6fb674-dqntl" [57dce88b-cd6c-4f39-babf-2079e2174e05] Pending
	I1003 18:28:33.148027  287189 system_pods.go:61] "registry-proxy-4nwwr" [5ad2d6c8-13b3-4729-a243-b2881c6c7d2b] Pending
	I1003 18:28:33.148036  287189 system_pods.go:61] "snapshot-controller-7d9fbc56b8-ct6ht" [90ef8c16-dc3b-446a-b290-7b60cc11a9de] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1003 18:28:33.148054  287189 system_pods.go:61] "snapshot-controller-7d9fbc56b8-k5rg9" [1c8643cd-15e1-4798-916a-253affe08a69] Pending
	I1003 18:28:33.148065  287189 system_pods.go:61] "storage-provisioner" [7632d49f-2ddc-429b-a88b-02e68f1b42e3] Pending
	I1003 18:28:33.148071  287189 system_pods.go:74] duration metric: took 80.729026ms to wait for pod list to return data ...
	I1003 18:28:33.148096  287189 default_sa.go:34] waiting for default service account to be created ...
	I1003 18:28:33.191788  287189 default_sa.go:45] found service account: "default"
	I1003 18:28:33.191824  287189 default_sa.go:55] duration metric: took 43.720773ms for default service account to be created ...
	I1003 18:28:33.191835  287189 system_pods.go:116] waiting for k8s-apps to be running ...
	I1003 18:28:33.329884  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:33.330213  287189 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1003 18:28:33.330240  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:33.330792  287189 system_pods.go:86] 19 kube-system pods found
	I1003 18:28:33.330821  287189 system_pods.go:89] "coredns-66bc5c9577-2hhqm" [daea3b45-b31f-453a-80f5-c30f7fce4122] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1003 18:28:33.330828  287189 system_pods.go:89] "csi-hostpath-attacher-0" [376ecb21-1ca4-4f77-bac5-a4b5af7ccfdd] Pending
	I1003 18:28:33.330843  287189 system_pods.go:89] "csi-hostpath-resizer-0" [8450e23b-d7f0-4b50-a20c-a7fc38411191] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1003 18:28:33.330852  287189 system_pods.go:89] "csi-hostpathplugin-vsbgb" [e6597406-4522-46da-ad41-da01126918f9] Pending
	I1003 18:28:33.330859  287189 system_pods.go:89] "etcd-addons-952140" [6c2991f4-ee56-4fbb-8f55-bf86ce3c8bc3] Running
	I1003 18:28:33.330863  287189 system_pods.go:89] "kindnet-vx5lb" [39f3102d-aa3a-4a72-b884-4fcf57878faf] Running
	I1003 18:28:33.330867  287189 system_pods.go:89] "kube-apiserver-addons-952140" [70a4748f-eedd-41aa-8ade-b8d13f6c85fe] Running
	I1003 18:28:33.330872  287189 system_pods.go:89] "kube-controller-manager-addons-952140" [7710c628-50b2-44d1-9faa-7ba463e404c9] Running
	I1003 18:28:33.330883  287189 system_pods.go:89] "kube-ingress-dns-minikube" [fbc268d3-be63-48bd-a93c-f3466f7458ed] Pending
	I1003 18:28:33.330887  287189 system_pods.go:89] "kube-proxy-5hd7r" [674b4e86-cafa-4e3f-8b57-719de4a646f5] Running
	I1003 18:28:33.330891  287189 system_pods.go:89] "kube-scheduler-addons-952140" [ef6d468e-24f4-474f-adeb-1d9e9cf74c87] Running
	I1003 18:28:33.330895  287189 system_pods.go:89] "metrics-server-85b7d694d7-tscmk" [51883ecf-f53c-4001-af25-5785ed3fa7db] Pending
	I1003 18:28:33.330905  287189 system_pods.go:89] "nvidia-device-plugin-daemonset-84v2d" [c0869084-f969-40cf-8475-57eedeb02a93] Pending
	I1003 18:28:33.330909  287189 system_pods.go:89] "registry-66898fdd98-88sgc" [749ffc38-9d67-4777-b96d-422ce39f2b46] Pending
	I1003 18:28:33.330920  287189 system_pods.go:89] "registry-creds-764b6fb674-dqntl" [57dce88b-cd6c-4f39-babf-2079e2174e05] Pending
	I1003 18:28:33.330925  287189 system_pods.go:89] "registry-proxy-4nwwr" [5ad2d6c8-13b3-4729-a243-b2881c6c7d2b] Pending
	I1003 18:28:33.330934  287189 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ct6ht" [90ef8c16-dc3b-446a-b290-7b60cc11a9de] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1003 18:28:33.330938  287189 system_pods.go:89] "snapshot-controller-7d9fbc56b8-k5rg9" [1c8643cd-15e1-4798-916a-253affe08a69] Pending
	I1003 18:28:33.330944  287189 system_pods.go:89] "storage-provisioner" [7632d49f-2ddc-429b-a88b-02e68f1b42e3] Pending
	I1003 18:28:33.330958  287189 retry.go:31] will retry after 207.53529ms: missing components: kube-dns
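
The k8s-apps wait that follows amounts to listing kube-system pods and retrying until the core components (here kube-dns, i.e. the coredns pods labelled k8s-app=kube-dns) report Running. A minimal client-go sketch of that polling loop; the kubeconfig path and the fixed sleep interval are assumptions for illustration:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path is an assumption for illustration.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll until the coredns pods are Running, mirroring the
	// "missing components: kube-dns" retries in the log above.
	for {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
		if err != nil {
			panic(err)
		}
		running := 0
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				running++
			}
		}
		if len(pods.Items) > 0 && running == len(pods.Items) {
			fmt.Println("kube-dns is running")
			return
		}
		fmt.Println("missing components: kube-dns; retrying")
		time.Sleep(500 * time.Millisecond)
	}
}
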
	I1003 18:28:33.525198  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:33.525665  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:33.554907  287189 system_pods.go:86] 19 kube-system pods found
	I1003 18:28:33.554969  287189 system_pods.go:89] "coredns-66bc5c9577-2hhqm" [daea3b45-b31f-453a-80f5-c30f7fce4122] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1003 18:28:33.554981  287189 system_pods.go:89] "csi-hostpath-attacher-0" [376ecb21-1ca4-4f77-bac5-a4b5af7ccfdd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1003 18:28:33.554990  287189 system_pods.go:89] "csi-hostpath-resizer-0" [8450e23b-d7f0-4b50-a20c-a7fc38411191] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1003 18:28:33.555003  287189 system_pods.go:89] "csi-hostpathplugin-vsbgb" [e6597406-4522-46da-ad41-da01126918f9] Pending
	I1003 18:28:33.555026  287189 system_pods.go:89] "etcd-addons-952140" [6c2991f4-ee56-4fbb-8f55-bf86ce3c8bc3] Running
	I1003 18:28:33.555032  287189 system_pods.go:89] "kindnet-vx5lb" [39f3102d-aa3a-4a72-b884-4fcf57878faf] Running
	I1003 18:28:33.555042  287189 system_pods.go:89] "kube-apiserver-addons-952140" [70a4748f-eedd-41aa-8ade-b8d13f6c85fe] Running
	I1003 18:28:33.555047  287189 system_pods.go:89] "kube-controller-manager-addons-952140" [7710c628-50b2-44d1-9faa-7ba463e404c9] Running
	I1003 18:28:33.555063  287189 system_pods.go:89] "kube-ingress-dns-minikube" [fbc268d3-be63-48bd-a93c-f3466f7458ed] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1003 18:28:33.555079  287189 system_pods.go:89] "kube-proxy-5hd7r" [674b4e86-cafa-4e3f-8b57-719de4a646f5] Running
	I1003 18:28:33.555085  287189 system_pods.go:89] "kube-scheduler-addons-952140" [ef6d468e-24f4-474f-adeb-1d9e9cf74c87] Running
	I1003 18:28:33.555102  287189 system_pods.go:89] "metrics-server-85b7d694d7-tscmk" [51883ecf-f53c-4001-af25-5785ed3fa7db] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1003 18:28:33.555118  287189 system_pods.go:89] "nvidia-device-plugin-daemonset-84v2d" [c0869084-f969-40cf-8475-57eedeb02a93] Pending
	I1003 18:28:33.555132  287189 system_pods.go:89] "registry-66898fdd98-88sgc" [749ffc38-9d67-4777-b96d-422ce39f2b46] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1003 18:28:33.555141  287189 system_pods.go:89] "registry-creds-764b6fb674-dqntl" [57dce88b-cd6c-4f39-babf-2079e2174e05] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1003 18:28:33.555157  287189 system_pods.go:89] "registry-proxy-4nwwr" [5ad2d6c8-13b3-4729-a243-b2881c6c7d2b] Pending
	I1003 18:28:33.555164  287189 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ct6ht" [90ef8c16-dc3b-446a-b290-7b60cc11a9de] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1003 18:28:33.555175  287189 system_pods.go:89] "snapshot-controller-7d9fbc56b8-k5rg9" [1c8643cd-15e1-4798-916a-253affe08a69] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1003 18:28:33.555189  287189 system_pods.go:89] "storage-provisioner" [7632d49f-2ddc-429b-a88b-02e68f1b42e3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1003 18:28:33.555217  287189 retry.go:31] will retry after 295.743819ms: missing components: kube-dns
	I1003 18:28:33.747816  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:33.849840  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:33.952057  287189 system_pods.go:86] 19 kube-system pods found
	I1003 18:28:33.952103  287189 system_pods.go:89] "coredns-66bc5c9577-2hhqm" [daea3b45-b31f-453a-80f5-c30f7fce4122] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1003 18:28:33.952117  287189 system_pods.go:89] "csi-hostpath-attacher-0" [376ecb21-1ca4-4f77-bac5-a4b5af7ccfdd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1003 18:28:33.952124  287189 system_pods.go:89] "csi-hostpath-resizer-0" [8450e23b-d7f0-4b50-a20c-a7fc38411191] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1003 18:28:33.952131  287189 system_pods.go:89] "csi-hostpathplugin-vsbgb" [e6597406-4522-46da-ad41-da01126918f9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1003 18:28:33.952139  287189 system_pods.go:89] "etcd-addons-952140" [6c2991f4-ee56-4fbb-8f55-bf86ce3c8bc3] Running
	I1003 18:28:33.952155  287189 system_pods.go:89] "kindnet-vx5lb" [39f3102d-aa3a-4a72-b884-4fcf57878faf] Running
	I1003 18:28:33.952164  287189 system_pods.go:89] "kube-apiserver-addons-952140" [70a4748f-eedd-41aa-8ade-b8d13f6c85fe] Running
	I1003 18:28:33.952174  287189 system_pods.go:89] "kube-controller-manager-addons-952140" [7710c628-50b2-44d1-9faa-7ba463e404c9] Running
	I1003 18:28:33.952187  287189 system_pods.go:89] "kube-ingress-dns-minikube" [fbc268d3-be63-48bd-a93c-f3466f7458ed] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1003 18:28:33.952191  287189 system_pods.go:89] "kube-proxy-5hd7r" [674b4e86-cafa-4e3f-8b57-719de4a646f5] Running
	I1003 18:28:33.952197  287189 system_pods.go:89] "kube-scheduler-addons-952140" [ef6d468e-24f4-474f-adeb-1d9e9cf74c87] Running
	I1003 18:28:33.952204  287189 system_pods.go:89] "metrics-server-85b7d694d7-tscmk" [51883ecf-f53c-4001-af25-5785ed3fa7db] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1003 18:28:33.952227  287189 system_pods.go:89] "nvidia-device-plugin-daemonset-84v2d" [c0869084-f969-40cf-8475-57eedeb02a93] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1003 18:28:33.952240  287189 system_pods.go:89] "registry-66898fdd98-88sgc" [749ffc38-9d67-4777-b96d-422ce39f2b46] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1003 18:28:33.952250  287189 system_pods.go:89] "registry-creds-764b6fb674-dqntl" [57dce88b-cd6c-4f39-babf-2079e2174e05] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1003 18:28:33.952264  287189 system_pods.go:89] "registry-proxy-4nwwr" [5ad2d6c8-13b3-4729-a243-b2881c6c7d2b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1003 18:28:33.952272  287189 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ct6ht" [90ef8c16-dc3b-446a-b290-7b60cc11a9de] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1003 18:28:33.952283  287189 system_pods.go:89] "snapshot-controller-7d9fbc56b8-k5rg9" [1c8643cd-15e1-4798-916a-253affe08a69] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1003 18:28:33.952289  287189 system_pods.go:89] "storage-provisioner" [7632d49f-2ddc-429b-a88b-02e68f1b42e3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1003 18:28:33.952312  287189 retry.go:31] will retry after 463.876902ms: missing components: kube-dns
	I1003 18:28:34.051191  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:34.051330  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:34.247166  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:34.287672  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:34.421608  287189 system_pods.go:86] 19 kube-system pods found
	I1003 18:28:34.421643  287189 system_pods.go:89] "coredns-66bc5c9577-2hhqm" [daea3b45-b31f-453a-80f5-c30f7fce4122] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1003 18:28:34.421652  287189 system_pods.go:89] "csi-hostpath-attacher-0" [376ecb21-1ca4-4f77-bac5-a4b5af7ccfdd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1003 18:28:34.421660  287189 system_pods.go:89] "csi-hostpath-resizer-0" [8450e23b-d7f0-4b50-a20c-a7fc38411191] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1003 18:28:34.421675  287189 system_pods.go:89] "csi-hostpathplugin-vsbgb" [e6597406-4522-46da-ad41-da01126918f9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1003 18:28:34.421683  287189 system_pods.go:89] "etcd-addons-952140" [6c2991f4-ee56-4fbb-8f55-bf86ce3c8bc3] Running
	I1003 18:28:34.421689  287189 system_pods.go:89] "kindnet-vx5lb" [39f3102d-aa3a-4a72-b884-4fcf57878faf] Running
	I1003 18:28:34.421700  287189 system_pods.go:89] "kube-apiserver-addons-952140" [70a4748f-eedd-41aa-8ade-b8d13f6c85fe] Running
	I1003 18:28:34.421704  287189 system_pods.go:89] "kube-controller-manager-addons-952140" [7710c628-50b2-44d1-9faa-7ba463e404c9] Running
	I1003 18:28:34.421711  287189 system_pods.go:89] "kube-ingress-dns-minikube" [fbc268d3-be63-48bd-a93c-f3466f7458ed] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1003 18:28:34.421720  287189 system_pods.go:89] "kube-proxy-5hd7r" [674b4e86-cafa-4e3f-8b57-719de4a646f5] Running
	I1003 18:28:34.421724  287189 system_pods.go:89] "kube-scheduler-addons-952140" [ef6d468e-24f4-474f-adeb-1d9e9cf74c87] Running
	I1003 18:28:34.421732  287189 system_pods.go:89] "metrics-server-85b7d694d7-tscmk" [51883ecf-f53c-4001-af25-5785ed3fa7db] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1003 18:28:34.421752  287189 system_pods.go:89] "nvidia-device-plugin-daemonset-84v2d" [c0869084-f969-40cf-8475-57eedeb02a93] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1003 18:28:34.421760  287189 system_pods.go:89] "registry-66898fdd98-88sgc" [749ffc38-9d67-4777-b96d-422ce39f2b46] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1003 18:28:34.421768  287189 system_pods.go:89] "registry-creds-764b6fb674-dqntl" [57dce88b-cd6c-4f39-babf-2079e2174e05] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1003 18:28:34.421780  287189 system_pods.go:89] "registry-proxy-4nwwr" [5ad2d6c8-13b3-4729-a243-b2881c6c7d2b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1003 18:28:34.421786  287189 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ct6ht" [90ef8c16-dc3b-446a-b290-7b60cc11a9de] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1003 18:28:34.421792  287189 system_pods.go:89] "snapshot-controller-7d9fbc56b8-k5rg9" [1c8643cd-15e1-4798-916a-253affe08a69] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1003 18:28:34.421800  287189 system_pods.go:89] "storage-provisioner" [7632d49f-2ddc-429b-a88b-02e68f1b42e3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1003 18:28:34.421826  287189 retry.go:31] will retry after 374.526593ms: missing components: kube-dns
	I1003 18:28:34.522771  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:34.523195  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:34.748517  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:34.788246  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:34.850410  287189 system_pods.go:86] 19 kube-system pods found
	I1003 18:28:34.850500  287189 system_pods.go:89] "coredns-66bc5c9577-2hhqm" [daea3b45-b31f-453a-80f5-c30f7fce4122] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1003 18:28:34.850525  287189 system_pods.go:89] "csi-hostpath-attacher-0" [376ecb21-1ca4-4f77-bac5-a4b5af7ccfdd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1003 18:28:34.850563  287189 system_pods.go:89] "csi-hostpath-resizer-0" [8450e23b-d7f0-4b50-a20c-a7fc38411191] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1003 18:28:34.850590  287189 system_pods.go:89] "csi-hostpathplugin-vsbgb" [e6597406-4522-46da-ad41-da01126918f9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1003 18:28:34.850619  287189 system_pods.go:89] "etcd-addons-952140" [6c2991f4-ee56-4fbb-8f55-bf86ce3c8bc3] Running
	I1003 18:28:34.850640  287189 system_pods.go:89] "kindnet-vx5lb" [39f3102d-aa3a-4a72-b884-4fcf57878faf] Running
	I1003 18:28:34.850670  287189 system_pods.go:89] "kube-apiserver-addons-952140" [70a4748f-eedd-41aa-8ade-b8d13f6c85fe] Running
	I1003 18:28:34.850698  287189 system_pods.go:89] "kube-controller-manager-addons-952140" [7710c628-50b2-44d1-9faa-7ba463e404c9] Running
	I1003 18:28:34.850722  287189 system_pods.go:89] "kube-ingress-dns-minikube" [fbc268d3-be63-48bd-a93c-f3466f7458ed] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1003 18:28:34.850740  287189 system_pods.go:89] "kube-proxy-5hd7r" [674b4e86-cafa-4e3f-8b57-719de4a646f5] Running
	I1003 18:28:34.850774  287189 system_pods.go:89] "kube-scheduler-addons-952140" [ef6d468e-24f4-474f-adeb-1d9e9cf74c87] Running
	I1003 18:28:34.850800  287189 system_pods.go:89] "metrics-server-85b7d694d7-tscmk" [51883ecf-f53c-4001-af25-5785ed3fa7db] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1003 18:28:34.850823  287189 system_pods.go:89] "nvidia-device-plugin-daemonset-84v2d" [c0869084-f969-40cf-8475-57eedeb02a93] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1003 18:28:34.850843  287189 system_pods.go:89] "registry-66898fdd98-88sgc" [749ffc38-9d67-4777-b96d-422ce39f2b46] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1003 18:28:34.850875  287189 system_pods.go:89] "registry-creds-764b6fb674-dqntl" [57dce88b-cd6c-4f39-babf-2079e2174e05] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1003 18:28:34.850899  287189 system_pods.go:89] "registry-proxy-4nwwr" [5ad2d6c8-13b3-4729-a243-b2881c6c7d2b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1003 18:28:34.850917  287189 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ct6ht" [90ef8c16-dc3b-446a-b290-7b60cc11a9de] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1003 18:28:34.850937  287189 system_pods.go:89] "snapshot-controller-7d9fbc56b8-k5rg9" [1c8643cd-15e1-4798-916a-253affe08a69] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1003 18:28:34.850957  287189 system_pods.go:89] "storage-provisioner" [7632d49f-2ddc-429b-a88b-02e68f1b42e3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1003 18:28:34.850996  287189 retry.go:31] will retry after 632.453233ms: missing components: kube-dns
	I1003 18:28:35.023288  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:35.023804  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:35.247576  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:35.288178  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:35.489626  287189 system_pods.go:86] 19 kube-system pods found
	I1003 18:28:35.489676  287189 system_pods.go:89] "coredns-66bc5c9577-2hhqm" [daea3b45-b31f-453a-80f5-c30f7fce4122] Running
	I1003 18:28:35.489689  287189 system_pods.go:89] "csi-hostpath-attacher-0" [376ecb21-1ca4-4f77-bac5-a4b5af7ccfdd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1003 18:28:35.489699  287189 system_pods.go:89] "csi-hostpath-resizer-0" [8450e23b-d7f0-4b50-a20c-a7fc38411191] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1003 18:28:35.489716  287189 system_pods.go:89] "csi-hostpathplugin-vsbgb" [e6597406-4522-46da-ad41-da01126918f9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1003 18:28:35.489731  287189 system_pods.go:89] "etcd-addons-952140" [6c2991f4-ee56-4fbb-8f55-bf86ce3c8bc3] Running
	I1003 18:28:35.489743  287189 system_pods.go:89] "kindnet-vx5lb" [39f3102d-aa3a-4a72-b884-4fcf57878faf] Running
	I1003 18:28:35.489748  287189 system_pods.go:89] "kube-apiserver-addons-952140" [70a4748f-eedd-41aa-8ade-b8d13f6c85fe] Running
	I1003 18:28:35.489752  287189 system_pods.go:89] "kube-controller-manager-addons-952140" [7710c628-50b2-44d1-9faa-7ba463e404c9] Running
	I1003 18:28:35.489762  287189 system_pods.go:89] "kube-ingress-dns-minikube" [fbc268d3-be63-48bd-a93c-f3466f7458ed] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1003 18:28:35.489778  287189 system_pods.go:89] "kube-proxy-5hd7r" [674b4e86-cafa-4e3f-8b57-719de4a646f5] Running
	I1003 18:28:35.489785  287189 system_pods.go:89] "kube-scheduler-addons-952140" [ef6d468e-24f4-474f-adeb-1d9e9cf74c87] Running
	I1003 18:28:35.489791  287189 system_pods.go:89] "metrics-server-85b7d694d7-tscmk" [51883ecf-f53c-4001-af25-5785ed3fa7db] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1003 18:28:35.489799  287189 system_pods.go:89] "nvidia-device-plugin-daemonset-84v2d" [c0869084-f969-40cf-8475-57eedeb02a93] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1003 18:28:35.489809  287189 system_pods.go:89] "registry-66898fdd98-88sgc" [749ffc38-9d67-4777-b96d-422ce39f2b46] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1003 18:28:35.489822  287189 system_pods.go:89] "registry-creds-764b6fb674-dqntl" [57dce88b-cd6c-4f39-babf-2079e2174e05] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1003 18:28:35.489827  287189 system_pods.go:89] "registry-proxy-4nwwr" [5ad2d6c8-13b3-4729-a243-b2881c6c7d2b] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1003 18:28:35.489834  287189 system_pods.go:89] "snapshot-controller-7d9fbc56b8-ct6ht" [90ef8c16-dc3b-446a-b290-7b60cc11a9de] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1003 18:28:35.489859  287189 system_pods.go:89] "snapshot-controller-7d9fbc56b8-k5rg9" [1c8643cd-15e1-4798-916a-253affe08a69] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1003 18:28:35.489869  287189 system_pods.go:89] "storage-provisioner" [7632d49f-2ddc-429b-a88b-02e68f1b42e3] Running
	I1003 18:28:35.489878  287189 system_pods.go:126] duration metric: took 2.298036212s to wait for k8s-apps to be running ...
	I1003 18:28:35.489888  287189 system_svc.go:44] waiting for kubelet service to be running ....
	I1003 18:28:35.489973  287189 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 18:28:35.505835  287189 system_svc.go:56] duration metric: took 15.932073ms WaitForService to wait for kubelet
	I1003 18:28:35.505921  287189 kubeadm.go:586] duration metric: took 43.619137647s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 18:28:35.505955  287189 node_conditions.go:102] verifying NodePressure condition ...
	I1003 18:28:35.509616  287189 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1003 18:28:35.509695  287189 node_conditions.go:123] node cpu capacity is 2
	I1003 18:28:35.509723  287189 node_conditions.go:105] duration metric: took 3.736202ms to run NodePressure ...
	I1003 18:28:35.509763  287189 start.go:241] waiting for startup goroutines ...
	I1003 18:28:35.523605  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:35.524693  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:35.748072  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:35.853007  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:36.021062  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:36.022187  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:36.247549  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:36.287585  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:36.523033  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:36.523339  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:36.747950  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:36.787618  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:37.026567  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:37.026840  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:37.247118  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:37.287975  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:37.521479  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:37.521873  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:37.747527  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:37.787986  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:38.024451  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:38.024785  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:38.247882  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:38.287612  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:38.522576  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:38.522806  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:38.748179  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:38.787220  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:39.029453  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:39.029500  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:39.247691  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:39.287834  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:39.523239  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:39.523522  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:39.747908  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:39.787841  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:40.024756  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:40.025306  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:40.248152  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:40.287636  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:40.521526  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:40.521669  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:40.747686  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:40.788079  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:41.026921  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:41.027340  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:41.247966  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:41.287776  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:41.522160  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:41.522793  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:41.746938  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:41.787254  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:42.023432  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:42.024609  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:42.249542  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:42.349312  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:42.521949  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:42.522412  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:42.747480  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:42.788127  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:43.021932  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:43.023247  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:43.248100  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:43.287988  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:43.521705  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:43.522213  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:43.747373  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:43.787795  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:44.022579  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:44.023067  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:44.248345  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:44.287912  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:44.520924  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:44.523144  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:44.747261  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:44.787397  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:45.037386  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:45.045151  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:45.248613  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:45.291583  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:45.524043  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:45.524640  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:45.747821  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:45.787935  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:46.024100  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:46.024598  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:46.248066  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:46.287633  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:46.525795  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:46.526258  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:46.747700  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:46.787797  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:47.022727  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:47.023236  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:47.247367  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:47.287515  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:47.522252  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:47.522617  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:47.747538  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:47.787580  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:48.022780  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:48.023565  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:48.248440  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:48.288379  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:48.313869  287189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1003 18:28:48.523570  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:48.524016  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:48.747168  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:48.787681  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:49.034754  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:49.036447  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:49.246998  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:49.287887  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:49.413195  287189 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.099286831s)
	W1003 18:28:49.413234  287189 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1003 18:28:49.413258  287189 retry.go:31] will retry after 31.075300228s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
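
The "apiVersion not set, kind not set" failure above is kubectl's client-side validation rejecting /etc/kubernetes/addons/ig-crd.yaml: every document in an applied manifest must carry those two top-level type fields. A minimal sketch of the header shape the validator expects, assuming the shipped CRD file is simply missing them (the group and resource names below are hypothetical, for illustration only):

	# Assumed header shape only; the actual inspektor-gadget CRD content may differ.
	apiVersion: apiextensions.k8s.io/v1
	kind: CustomResourceDefinition
	metadata:
	  name: traces.gadget.example.io   # hypothetical resource name

As the stderr notes, the check could also be bypassed with --validate=false, but the retry scheduled above re-runs the identical apply command (seen again at 18:29:20 below), so unless the manifest itself changes it would presumably hit the same validation error.
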
	I1003 18:28:49.521991  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:49.522124  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:49.747143  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:49.787107  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:50.024250  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:50.024390  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:50.247749  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:50.287449  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:50.523574  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:50.523711  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:50.747046  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:50.787786  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:51.022990  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:51.023160  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:51.248255  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:51.287849  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:51.522253  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:51.523809  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:51.747032  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:51.787694  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:52.021379  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:52.024131  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:52.247633  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:52.289591  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:52.527843  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:52.528837  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:52.747171  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:52.787820  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:53.024000  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:53.024327  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:53.249103  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:53.289677  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:53.522886  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:53.523025  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:53.747309  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:53.788164  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:54.024120  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:54.024429  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:54.255232  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:54.287561  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:54.522688  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:54.523792  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:54.746720  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:54.788064  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:55.025952  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:55.026477  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:55.253046  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:55.298189  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:55.524595  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:55.525002  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:55.748425  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:55.788062  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:56.023150  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:56.023518  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:56.247541  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:56.287976  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:56.521760  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:56.523005  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:56.747368  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:56.787889  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:57.023599  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:57.024018  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:57.247400  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:57.287794  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:57.524230  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:57.524699  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:57.747683  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:57.787732  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:58.036025  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:58.036550  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:58.248530  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:58.287925  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:58.521462  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:58.523214  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:58.747184  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:58.788595  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:59.022257  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:59.023503  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:59.247419  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:59.288057  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:28:59.523209  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:28:59.525605  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:28:59.746930  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:28:59.788118  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:00.023123  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:00.024415  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:29:00.288056  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:00.306708  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:00.522876  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:29:00.523023  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:00.747568  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:00.787842  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:01.020754  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:01.022678  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:29:01.247382  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:01.287858  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:01.523169  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:29:01.523746  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:01.747131  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:01.787455  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:02.022655  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:29:02.022850  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:02.247565  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:02.288473  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:02.524164  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:02.524802  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:29:02.747204  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:02.787719  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:03.021247  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:03.022948  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:29:03.246659  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:03.287356  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:03.523598  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:03.525784  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:29:03.750285  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:03.788289  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:04.022583  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:29:04.022764  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:04.246642  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:04.288818  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:04.523687  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:04.524279  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:29:04.747687  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:04.788272  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:05.024069  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:05.024535  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:29:05.247498  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:05.287797  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:05.521200  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:05.522479  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:29:05.747728  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:05.788131  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:06.021776  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:06.024397  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:29:06.247627  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:06.288915  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:06.522021  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:06.523673  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:29:06.747624  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:06.787595  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:07.022878  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:29:07.023138  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:07.247384  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:07.288141  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:07.523807  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:07.524396  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:29:07.747794  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:07.787821  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:08.023489  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:29:08.024135  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:08.247310  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:08.287771  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:08.523782  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:29:08.524050  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:08.747362  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:08.788436  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:09.023839  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:09.024243  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:29:09.247390  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:09.287288  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:09.521479  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:09.522960  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:29:09.747047  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:09.787205  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:10.024129  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:10.024795  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:29:10.248395  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:10.288582  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:10.525468  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:10.527174  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:29:10.747478  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:10.788089  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:11.022595  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:11.022933  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:29:11.248100  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:11.288508  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:11.523105  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:29:11.523148  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:11.747391  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:11.788252  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:12.023485  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:29:12.023678  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:12.270521  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:12.303366  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:12.522263  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:12.522307  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:29:12.747376  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:12.787712  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:13.026769  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:29:13.026947  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:13.247324  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:13.287652  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:13.522205  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:13.522416  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:29:13.747819  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:13.787547  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:14.022953  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:29:14.023922  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:14.246932  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:14.288822  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:14.523530  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:29:14.523869  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:14.747033  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:14.787743  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:15.031877  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:29:15.032250  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:15.247090  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:15.287400  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:15.524082  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:29:15.524385  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:15.747659  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:15.848367  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:16.022825  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1003 18:29:16.023028  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:16.248004  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:16.287693  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:16.523974  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:16.524067  287189 kapi.go:107] duration metric: took 1m18.505203594s to wait for kubernetes.io/minikube-addons=registry ...
	I1003 18:29:16.746785  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:16.788081  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:17.021843  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:17.251389  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:17.288439  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:17.522316  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:17.747871  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:17.789475  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:18.022498  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:18.248585  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:18.288620  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:18.523490  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:18.747542  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:18.787591  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:19.021717  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:19.252694  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:19.287946  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:19.521767  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:19.746495  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:19.787403  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:20.022013  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:20.247136  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:20.287634  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:20.489073  287189 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1003 18:29:20.521991  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:20.746858  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:20.787428  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:21.021438  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:21.247100  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:21.287329  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:21.521832  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:21.575935  287189 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.086825771s)
	W1003 18:29:21.575978  287189 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1003 18:29:21.576058  287189 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
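Note on the failure above: kubectl rejected /etc/kubernetes/addons/ig-crd.yaml because schema validation found no top-level `apiVersion` or `kind` in one of its documents, while the objects from ig-deployment.yaml applied unchanged. As a minimal sketch of what validation expects (a generic CRD, not the actual contents of the inspektor-gadget manifest; every name below is illustrative), each YAML document passed to `kubectl apply` needs both fields:

    # Hypothetical example only -- demonstrates the required apiVersion/kind
    # header that the error says is missing; real group/kind values differ.
    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: widgets.example.com
    spec:
      group: example.com
      names:
        plural: widgets
        singular: widget
        kind: Widget
      scope: Namespaced
      versions:
        - name: v1
          served: true
          storage: true
          schema:
            openAPIV3Schema:
              type: object

The `--validate=false` escape hatch mentioned in the error would let the apply proceed, but it only suppresses the check rather than fixing the malformed document.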
	I1003 18:29:21.747146  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:21.787541  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:22.022468  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:22.247580  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:22.348785  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:22.522427  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:22.747644  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:22.788279  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:23.021611  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:23.246819  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:23.288520  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:23.521804  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:23.747187  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:23.788129  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:24.021888  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:24.246933  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:24.288411  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:24.522258  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:24.747605  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:24.788035  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:25.021893  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:25.247353  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:25.287881  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:25.521602  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:25.748799  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:25.787910  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:26.021680  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:26.247697  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:26.288166  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:26.521906  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:26.747080  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:26.787461  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:27.021878  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:27.246773  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:27.287704  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:27.521537  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:27.747747  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:27.787860  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:28.028265  287189 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1003 18:29:28.247464  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:28.301702  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:28.522496  287189 kapi.go:107] duration metric: took 1m30.504588411s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1003 18:29:28.747534  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:28.787697  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:29.334196  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:29.334511  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:29.747260  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:29.787717  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:30.247229  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:30.288013  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:30.747803  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:30.787087  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:31.247211  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1003 18:29:31.287083  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:31.747812  287189 kapi.go:107] duration metric: took 1m30.003954252s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1003 18:29:31.750935  287189 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-952140 cluster.
	I1003 18:29:31.754029  287189 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1003 18:29:31.757342  287189 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
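The three gcp-auth messages above describe the webhook's opt-out mechanism. As a hedged sketch of that opt-out (only the `gcp-auth-skip-secret` label key comes from the log; the value and the rest of the pod spec are assumptions for illustration), a pod that should not have credentials mounted would carry the label in its metadata:

    # Minimal sketch, assuming the label value "true"; pod name and image
    # are placeholders, not taken from this test run.
    apiVersion: v1
    kind: Pod
    metadata:
      name: no-gcp-creds
      labels:
        gcp-auth-skip-secret: "true"
    spec:
      containers:
        - name: app
          image: busybox:1.28
          command: ["sleep", "3600"]

Pods created before the webhook came up are not retroactively mutated, which is why the log also suggests recreating them or rerunning the addon enable with --refresh.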
	I1003 18:29:31.787339  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:32.287733  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:32.787869  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:33.287608  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:33.787037  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:34.288117  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:34.787560  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:35.286877  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:35.788011  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:36.288315  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:36.788306  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:37.292152  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:37.794105  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:38.294232  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:38.788565  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:39.293698  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:39.788174  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:40.287870  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:40.787987  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:41.287959  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:41.787864  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:42.287597  287189 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1003 18:29:42.787522  287189 kapi.go:107] duration metric: took 1m44.503650299s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1003 18:29:42.790777  287189 out.go:179] * Enabled addons: registry-creds, nvidia-device-plugin, storage-provisioner, cloud-spanner, ingress-dns, amd-gpu-device-plugin, storage-provisioner-rancher, metrics-server, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1003 18:29:42.793786  287189 addons.go:514] duration metric: took 1m50.907714201s for enable addons: enabled=[registry-creds nvidia-device-plugin storage-provisioner cloud-spanner ingress-dns amd-gpu-device-plugin storage-provisioner-rancher metrics-server yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1003 18:29:42.793867  287189 start.go:246] waiting for cluster config update ...
	I1003 18:29:42.793913  287189 start.go:255] writing updated cluster config ...
	I1003 18:29:42.794250  287189 ssh_runner.go:195] Run: rm -f paused
	I1003 18:29:42.797821  287189 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1003 18:29:42.802183  287189 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-2hhqm" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 18:29:42.808836  287189 pod_ready.go:94] pod "coredns-66bc5c9577-2hhqm" is "Ready"
	I1003 18:29:42.808865  287189 pod_ready.go:86] duration metric: took 6.652804ms for pod "coredns-66bc5c9577-2hhqm" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 18:29:42.810967  287189 pod_ready.go:83] waiting for pod "etcd-addons-952140" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 18:29:42.814940  287189 pod_ready.go:94] pod "etcd-addons-952140" is "Ready"
	I1003 18:29:42.814964  287189 pod_ready.go:86] duration metric: took 3.975951ms for pod "etcd-addons-952140" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 18:29:42.817323  287189 pod_ready.go:83] waiting for pod "kube-apiserver-addons-952140" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 18:29:42.822282  287189 pod_ready.go:94] pod "kube-apiserver-addons-952140" is "Ready"
	I1003 18:29:42.822310  287189 pod_ready.go:86] duration metric: took 4.962943ms for pod "kube-apiserver-addons-952140" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 18:29:42.824755  287189 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-952140" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 18:29:43.203261  287189 pod_ready.go:94] pod "kube-controller-manager-addons-952140" is "Ready"
	I1003 18:29:43.203291  287189 pod_ready.go:86] duration metric: took 378.5099ms for pod "kube-controller-manager-addons-952140" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 18:29:43.403250  287189 pod_ready.go:83] waiting for pod "kube-proxy-5hd7r" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 18:29:43.802480  287189 pod_ready.go:94] pod "kube-proxy-5hd7r" is "Ready"
	I1003 18:29:43.802508  287189 pod_ready.go:86] duration metric: took 399.228533ms for pod "kube-proxy-5hd7r" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 18:29:44.002998  287189 pod_ready.go:83] waiting for pod "kube-scheduler-addons-952140" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 18:29:44.403334  287189 pod_ready.go:94] pod "kube-scheduler-addons-952140" is "Ready"
	I1003 18:29:44.403361  287189 pod_ready.go:86] duration metric: took 400.338076ms for pod "kube-scheduler-addons-952140" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 18:29:44.403373  287189 pod_ready.go:40] duration metric: took 1.605518935s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1003 18:29:44.460224  287189 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1003 18:29:44.463385  287189 out.go:179] * Done! kubectl is now configured to use "addons-952140" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 03 18:29:45 addons-952140 crio[830]: time="2025-10-03T18:29:45.628441604Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 03 18:29:46 addons-952140 crio[830]: time="2025-10-03T18:29:46.55661584Z" level=info msg="Removing container: c79a52dacfab2a61291462edcd3d1e76a3797ca712370213702729b69036e0f7" id=77ae5244-508f-47be-8aaf-3da7acaf4186 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 03 18:29:46 addons-952140 crio[830]: time="2025-10-03T18:29:46.558938518Z" level=info msg="Error loading conmon cgroup of container c79a52dacfab2a61291462edcd3d1e76a3797ca712370213702729b69036e0f7: cgroup deleted" id=77ae5244-508f-47be-8aaf-3da7acaf4186 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 03 18:29:46 addons-952140 crio[830]: time="2025-10-03T18:29:46.567009028Z" level=info msg="Removed container c79a52dacfab2a61291462edcd3d1e76a3797ca712370213702729b69036e0f7: gcp-auth/gcp-auth-certs-create-nv4gb/create" id=77ae5244-508f-47be-8aaf-3da7acaf4186 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 03 18:29:46 addons-952140 crio[830]: time="2025-10-03T18:29:46.56850258Z" level=info msg="Removing container: c73302061d24f4ff312a24a44df6c977ebbfbfb675e26ee1a7a1f29ca3971d99" id=af014655-1e35-468f-9ec2-2abe3095be67 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 03 18:29:46 addons-952140 crio[830]: time="2025-10-03T18:29:46.571108315Z" level=info msg="Error loading conmon cgroup of container c73302061d24f4ff312a24a44df6c977ebbfbfb675e26ee1a7a1f29ca3971d99: cgroup deleted" id=af014655-1e35-468f-9ec2-2abe3095be67 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 03 18:29:46 addons-952140 crio[830]: time="2025-10-03T18:29:46.575629998Z" level=info msg="Removed container c73302061d24f4ff312a24a44df6c977ebbfbfb675e26ee1a7a1f29ca3971d99: gcp-auth/gcp-auth-certs-patch-6mk8n/patch" id=af014655-1e35-468f-9ec2-2abe3095be67 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 03 18:29:46 addons-952140 crio[830]: time="2025-10-03T18:29:46.578470747Z" level=info msg="Stopping pod sandbox: 8248e9dcc57c0c27ea4b69cddce21ff246e7f77997bfe2218b58f1866ddca2da" id=a70757c5-6bf2-42ee-a3ff-5f11dcde9aa5 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 03 18:29:46 addons-952140 crio[830]: time="2025-10-03T18:29:46.578657989Z" level=info msg="Stopped pod sandbox (already stopped): 8248e9dcc57c0c27ea4b69cddce21ff246e7f77997bfe2218b58f1866ddca2da" id=a70757c5-6bf2-42ee-a3ff-5f11dcde9aa5 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 03 18:29:46 addons-952140 crio[830]: time="2025-10-03T18:29:46.579216038Z" level=info msg="Removing pod sandbox: 8248e9dcc57c0c27ea4b69cddce21ff246e7f77997bfe2218b58f1866ddca2da" id=95c51cb3-337a-4893-84a9-7eadded9c1d4 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 03 18:29:46 addons-952140 crio[830]: time="2025-10-03T18:29:46.583823845Z" level=info msg="Removed pod sandbox: 8248e9dcc57c0c27ea4b69cddce21ff246e7f77997bfe2218b58f1866ddca2da" id=95c51cb3-337a-4893-84a9-7eadded9c1d4 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 03 18:29:46 addons-952140 crio[830]: time="2025-10-03T18:29:46.584470407Z" level=info msg="Stopping pod sandbox: fae559ae554d9410c16c4325521341dabe69526866a320dda65443f5793b73d1" id=9c2966a7-8272-469a-b3f8-bd88de353909 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 03 18:29:46 addons-952140 crio[830]: time="2025-10-03T18:29:46.584532227Z" level=info msg="Stopped pod sandbox (already stopped): fae559ae554d9410c16c4325521341dabe69526866a320dda65443f5793b73d1" id=9c2966a7-8272-469a-b3f8-bd88de353909 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 03 18:29:46 addons-952140 crio[830]: time="2025-10-03T18:29:46.585019758Z" level=info msg="Removing pod sandbox: fae559ae554d9410c16c4325521341dabe69526866a320dda65443f5793b73d1" id=16d788a4-6743-4974-99fd-29d5f0b4e02a name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 03 18:29:46 addons-952140 crio[830]: time="2025-10-03T18:29:46.593642754Z" level=info msg="Removed pod sandbox: fae559ae554d9410c16c4325521341dabe69526866a320dda65443f5793b73d1" id=16d788a4-6743-4974-99fd-29d5f0b4e02a name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 03 18:29:47 addons-952140 crio[830]: time="2025-10-03T18:29:47.630459954Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=5952d484-36b5-4d45-ad30-dbfe90171159 name=/runtime.v1.ImageService/PullImage
	Oct 03 18:29:47 addons-952140 crio[830]: time="2025-10-03T18:29:47.631111316Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c1869644-5222-4f93-ac39-e1b765ca714d name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:29:47 addons-952140 crio[830]: time="2025-10-03T18:29:47.633019918Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=761e4271-fdcd-47cf-aa9d-1ec44409ac1b name=/runtime.v1.ImageService/ImageStatus
	Oct 03 18:29:47 addons-952140 crio[830]: time="2025-10-03T18:29:47.639817096Z" level=info msg="Creating container: default/busybox/busybox" id=f543b22b-f07b-47e2-953e-6c780d2fe3e0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:29:47 addons-952140 crio[830]: time="2025-10-03T18:29:47.640600223Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:29:47 addons-952140 crio[830]: time="2025-10-03T18:29:47.647361223Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:29:47 addons-952140 crio[830]: time="2025-10-03T18:29:47.648034977Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 18:29:47 addons-952140 crio[830]: time="2025-10-03T18:29:47.668403914Z" level=info msg="Created container b6c3eb481631c0deb3fe19c671c848914498e1bd9505c95f23b4a6f4b5586503: default/busybox/busybox" id=f543b22b-f07b-47e2-953e-6c780d2fe3e0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 18:29:47 addons-952140 crio[830]: time="2025-10-03T18:29:47.669766029Z" level=info msg="Starting container: b6c3eb481631c0deb3fe19c671c848914498e1bd9505c95f23b4a6f4b5586503" id=150b2c4a-f8b7-4a52-8c85-a61145fc41a2 name=/runtime.v1.RuntimeService/StartContainer
	Oct 03 18:29:47 addons-952140 crio[830]: time="2025-10-03T18:29:47.672245892Z" level=info msg="Started container" PID=4901 containerID=b6c3eb481631c0deb3fe19c671c848914498e1bd9505c95f23b4a6f4b5586503 description=default/busybox/busybox id=150b2c4a-f8b7-4a52-8c85-a61145fc41a2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=68836f5fdbebef9b27c44b8951eed4ed1f60772b34499d10ebbcd02ffa3b0c33
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	b6c3eb481631c       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          7 seconds ago        Running             busybox                                  0                   68836f5fdbebe       busybox                                    default
	a2a54b8525b1b       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          14 seconds ago       Running             csi-snapshotter                          0                   90055626cb73d       csi-hostpathplugin-vsbgb                   kube-system
	764f61b1d1b52       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          15 seconds ago       Running             csi-provisioner                          0                   90055626cb73d       csi-hostpathplugin-vsbgb                   kube-system
	5520f176a27b0       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            16 seconds ago       Running             liveness-probe                           0                   90055626cb73d       csi-hostpathplugin-vsbgb                   kube-system
	a55dd027b4c24       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           17 seconds ago       Running             hostpath                                 0                   90055626cb73d       csi-hostpathplugin-vsbgb                   kube-system
	58f575a0718cd       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:74b72c3673aff7e1fa7c3ebae80b5dbe5446ce1906ef8d4f98d4b9f6e72c88e1                            19 seconds ago       Running             gadget                                   0                   b4dfe4fefc5a8       gadget-8d4lm                               gadget
	d11765424ad97       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                23 seconds ago       Running             node-driver-registrar                    0                   90055626cb73d       csi-hostpathplugin-vsbgb                   kube-system
	1ca7b1012478e       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 24 seconds ago       Running             gcp-auth                                 0                   8f8339c688744       gcp-auth-78565c9fb4-qh9mv                  gcp-auth
	1a1f2d65645ab       registry.k8s.io/ingress-nginx/controller@sha256:f99290cbebde470590890356f061fd429ff3def99cc2dedb1fcd21626c5d73d6                             27 seconds ago       Running             controller                               0                   d2ef79573359b       ingress-nginx-controller-9cc49f96f-dwspc   ingress-nginx
	c019dcc46e8b9       gcr.io/cloud-spanner-emulator/emulator@sha256:77d0cd8103fe32875bbb04c070a7d1db292093b65d11c99c00cf39e8a13852f5                               33 seconds ago       Running             cloud-spanner-emulator                   0                   7d71f40dc2bb7       cloud-spanner-emulator-85f6b7fc65-thvpj    default
	ba5695d849b4f       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        38 seconds ago       Running             metrics-server                           0                   db535e031c153       metrics-server-85b7d694d7-tscmk            kube-system
	351cf9cd8e8f8       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              40 seconds ago       Running             registry-proxy                           0                   2219cf3ff875f       registry-proxy-4nwwr                       kube-system
	c2d0db82bc7f2       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      43 seconds ago       Running             volume-snapshot-controller               0                   183f8955b2cb3       snapshot-controller-7d9fbc56b8-k5rg9       kube-system
	5925d6c423d79       nvcr.io/nvidia/k8s-device-plugin@sha256:206d989142113ab71eaf27958a0e0a203f40103cf5b48890f5de80fd1b3fcfde                                     44 seconds ago       Running             nvidia-device-plugin-ctr                 0                   ddd6b9140c114       nvidia-device-plugin-daemonset-84v2d       kube-system
	11cc9a267a159       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              57 seconds ago       Running             yakd                                     0                   7ceb2f5a27e72       yakd-dashboard-5ff678cb9-ccz5v             yakd-dashboard
	228036e3d3021       docker.io/library/registry@sha256:f26c394e5b7c3a707c7373c3e9388e44f0d5bdd3def19652c6bd2ac1a0fa6758                                           About a minute ago   Running             registry                                 0                   b30d8d5e57e72       registry-66898fdd98-88sgc                  kube-system
	d38c57e36e359       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               About a minute ago   Running             minikube-ingress-dns                     0                   0b5e358299c34       kube-ingress-dns-minikube                  kube-system
	c8b82f114f8e3       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:73b47a951627d604fcf1cf93ddc15004fe3854f881da22f690854d098255f1c1                   About a minute ago   Exited              patch                                    0                   d91a9f6ff6792       ingress-nginx-admission-patch-bpnzz        ingress-nginx
	8ab3974a2c302       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   About a minute ago   Running             csi-external-health-monitor-controller   0                   90055626cb73d       csi-hostpathplugin-vsbgb                   kube-system
	70497b5707570       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   6a79f287a52e1       snapshot-controller-7d9fbc56b8-ct6ht       kube-system
	26742750260bf       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              About a minute ago   Running             csi-resizer                              0                   daef42118870d       csi-hostpath-resizer-0                     kube-system
	2bb1e9011f7aa       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             About a minute ago   Running             local-path-provisioner                   0                   d131b686b6646       local-path-provisioner-648f6765c9-rrkgn    local-path-storage
	7099c81ca982b       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             About a minute ago   Running             csi-attacher                             0                   4a984a0357c34       csi-hostpath-attacher-0                    kube-system
	036cd246674ae       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:73b47a951627d604fcf1cf93ddc15004fe3854f881da22f690854d098255f1c1                   About a minute ago   Exited              create                                   0                   076b77d645205       ingress-nginx-admission-create-4r899       ingress-nginx
	2657f869bb852       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             About a minute ago   Running             coredns                                  0                   f0a958be4f7ed       coredns-66bc5c9577-2hhqm                   kube-system
	82907fef03cc4       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             About a minute ago   Running             storage-provisioner                      0                   7af83c95d09e5       storage-provisioner                        kube-system
	28257b7548dee       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             2 minutes ago        Running             kube-proxy                               0                   2388d4ea56ec5       kube-proxy-5hd7r                           kube-system
	1a59139ec0fac       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             2 minutes ago        Running             kindnet-cni                              0                   e31991cd5cf89       kindnet-vx5lb                              kube-system
	23bd53ece83d0       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             2 minutes ago        Running             kube-apiserver                           0                   3fc39c1a47af7       kube-apiserver-addons-952140               kube-system
	1cbcaf90a2815       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             2 minutes ago        Running             kube-controller-manager                  0                   d76f4f0c6cff8       kube-controller-manager-addons-952140      kube-system
	22981c6dff74a       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             2 minutes ago        Running             kube-scheduler                           0                   628e6509a3941       kube-scheduler-addons-952140               kube-system
	e937e437e1e79       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             2 minutes ago        Running             etcd                                     0                   82b3b88c261f4       etcd-addons-952140                         kube-system
	
	
	==> coredns [2657f869bb8529138f74b802beedcd922a626ac30c50e54c72731eaff1b930c0] <==
	[INFO] 10.244.0.13:51292 - 31690 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000315412s
	[INFO] 10.244.0.13:51292 - 19902 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002194799s
	[INFO] 10.244.0.13:51292 - 40170 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002266458s
	[INFO] 10.244.0.13:51292 - 5930 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000111078s
	[INFO] 10.244.0.13:51292 - 49491 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000163544s
	[INFO] 10.244.0.13:42123 - 41584 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000158727s
	[INFO] 10.244.0.13:42123 - 41346 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000074055s
	[INFO] 10.244.0.13:41541 - 33908 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00009223s
	[INFO] 10.244.0.13:41541 - 33687 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000101379s
	[INFO] 10.244.0.13:38400 - 6582 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000085067s
	[INFO] 10.244.0.13:38400 - 6385 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000061353s
	[INFO] 10.244.0.13:46743 - 59576 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.006130656s
	[INFO] 10.244.0.13:46743 - 59371 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.006163954s
	[INFO] 10.244.0.13:36530 - 48295 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000113745s
	[INFO] 10.244.0.13:36530 - 48123 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000185732s
	[INFO] 10.244.0.20:56295 - 61874 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000176755s
	[INFO] 10.244.0.20:51460 - 16994 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000160557s
	[INFO] 10.244.0.20:41689 - 53547 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000285716s
	[INFO] 10.244.0.20:42517 - 30107 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000325357s
	[INFO] 10.244.0.20:43953 - 60196 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00016873s
	[INFO] 10.244.0.20:41318 - 12340 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000255184s
	[INFO] 10.244.0.20:51545 - 57495 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.004904901s
	[INFO] 10.244.0.20:51299 - 54833 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.004941145s
	[INFO] 10.244.0.20:58955 - 18158 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003360786s
	[INFO] 10.244.0.20:43220 - 1055 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.002768019s
	
	
	==> describe nodes <==
	Name:               addons-952140
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-952140
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a43873c79fc22f8b1ccd29d3dfa635d392b09335
	                    minikube.k8s.io/name=addons-952140
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_03T18_27_47_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-952140
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-952140"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 03 Oct 2025 18:27:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-952140
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 03 Oct 2025 18:29:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 03 Oct 2025 18:29:48 +0000   Fri, 03 Oct 2025 18:27:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 03 Oct 2025 18:29:48 +0000   Fri, 03 Oct 2025 18:27:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 03 Oct 2025 18:29:48 +0000   Fri, 03 Oct 2025 18:27:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 03 Oct 2025 18:29:48 +0000   Fri, 03 Oct 2025 18:28:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-952140
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 05cbf1d28c6b4036a123ffa7870f67eb
	  System UUID:                7f98a991-1761-476a-88c4-95c71c61f734
	  Boot ID:                    3762136e-8bec-4104-a5cb-0b1976f6048e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (26 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     cloud-spanner-emulator-85f6b7fc65-thvpj     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m
	  gadget                      gadget-8d4lm                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  gcp-auth                    gcp-auth-78565c9fb4-qh9mv                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-dwspc    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         118s
	  kube-system                 coredns-66bc5c9577-2hhqm                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m3s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 csi-hostpathplugin-vsbgb                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 etcd-addons-952140                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m9s
	  kube-system                 kindnet-vx5lb                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m4s
	  kube-system                 kube-apiserver-addons-952140                250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 kube-controller-manager-addons-952140       200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-proxy-5hd7r                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-scheduler-addons-952140                100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 metrics-server-85b7d694d7-tscmk             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         119s
	  kube-system                 nvidia-device-plugin-daemonset-84v2d        0 (0%)        0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 registry-66898fdd98-88sgc                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 registry-creds-764b6fb674-dqntl             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 registry-proxy-4nwwr                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 snapshot-controller-7d9fbc56b8-ct6ht        0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 snapshot-controller-7d9fbc56b8-k5rg9        0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m
	  local-path-storage          local-path-provisioner-648f6765c9-rrkgn     0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-ccz5v              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     118s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m2s                   kube-proxy       
	  Normal   Starting                 2m16s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m16s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m16s (x8 over 2m16s)  kubelet          Node addons-952140 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m16s (x8 over 2m16s)  kubelet          Node addons-952140 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m16s (x8 over 2m16s)  kubelet          Node addons-952140 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m9s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m9s                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m9s                   kubelet          Node addons-952140 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m9s                   kubelet          Node addons-952140 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m9s                   kubelet          Node addons-952140 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m5s                   node-controller  Node addons-952140 event: Registered Node addons-952140 in Controller
	  Normal   NodeReady                83s                    kubelet          Node addons-952140 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct 3 17:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.016734] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.507620] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.057770] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.764958] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.639190] kauditd_printk_skb: 36 callbacks suppressed
	[Oct 3 18:16] hrtimer: interrupt took 33359751 ns
	[Oct 3 18:26] kauditd_printk_skb: 8 callbacks suppressed
	[Oct 3 18:27] overlayfs: idmapped layers are currently not supported
	[  +0.053491] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [e937e437e1e79c6bcbb92c82ee9849b6f8ceb2c5980d23b084e27a6fb88ab45a] <==
	{"level":"warn","ts":"2025-10-03T18:27:42.491737Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T18:27:42.514414Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T18:27:42.525454Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T18:27:42.544837Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T18:27:42.558142Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T18:27:42.581981Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T18:27:42.620697Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T18:27:42.657803Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T18:27:42.673648Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T18:27:42.698893Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T18:27:42.720168Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T18:27:42.728243Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T18:27:42.770770Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T18:27:42.773445Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T18:27:42.791492Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T18:27:42.813150Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T18:27:42.850347Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T18:27:42.856773Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T18:27:42.935332Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T18:27:58.752957Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T18:27:58.768294Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T18:28:20.793509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T18:28:20.811206Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T18:28:20.833744Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T18:28:20.849587Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46642","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [1ca7b1012478e6884e729e3e480969ed4ef066a6aa202ed9cc90f7153a4e4320] <==
	2025/10/03 18:29:30 GCP Auth Webhook started!
	2025/10/03 18:29:44 Ready to marshal response ...
	2025/10/03 18:29:44 Ready to write response ...
	2025/10/03 18:29:45 Ready to marshal response ...
	2025/10/03 18:29:45 Ready to write response ...
	2025/10/03 18:29:45 Ready to marshal response ...
	2025/10/03 18:29:45 Ready to write response ...
	
	
	==> kernel <==
	 18:29:56 up  1:12,  0 user,  load average: 2.66, 3.53, 3.78
	Linux addons-952140 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1a59139ec0face1693267071ca3c3ba3e8eff397418ffbf25f3682c68eee244a] <==
	E1003 18:28:22.623171       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1003 18:28:22.623285       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1003 18:28:22.623185       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1003 18:28:22.623365       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1003 18:28:24.023263       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1003 18:28:24.023324       1 metrics.go:72] Registering metrics
	I1003 18:28:24.023377       1 controller.go:711] "Syncing nftables rules"
	I1003 18:28:32.625429       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1003 18:28:32.625474       1 main.go:301] handling current node
	I1003 18:28:42.622203       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1003 18:28:42.622239       1 main.go:301] handling current node
	I1003 18:28:52.624886       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1003 18:28:52.624917       1 main.go:301] handling current node
	I1003 18:29:02.624781       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1003 18:29:02.624813       1 main.go:301] handling current node
	I1003 18:29:12.622869       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1003 18:29:12.622901       1 main.go:301] handling current node
	I1003 18:29:22.622170       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1003 18:29:22.622203       1 main.go:301] handling current node
	I1003 18:29:32.622032       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1003 18:29:32.622082       1 main.go:301] handling current node
	I1003 18:29:42.623053       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1003 18:29:42.623091       1 main.go:301] handling current node
	I1003 18:29:52.622775       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1003 18:29:52.622884       1 main.go:301] handling current node
	
	
	==> kube-apiserver [23bd53ece83d04d894e5fc60fda04a6f8bdfe8d6c59ffad6c4dcacc168ec4ed8] <==
	W1003 18:28:32.831426       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.97.213.134:443: connect: connection refused
	E1003 18:28:32.831498       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.97.213.134:443: connect: connection refused" logger="UnhandledError"
	W1003 18:28:32.909716       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.97.213.134:443: connect: connection refused
	E1003 18:28:32.909759       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.97.213.134:443: connect: connection refused" logger="UnhandledError"
	W1003 18:28:57.557065       1 handler_proxy.go:99] no RequestInfo found in the context
	E1003 18:28:57.557152       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1003 18:28:57.557178       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1003 18:28:57.558341       1 handler_proxy.go:99] no RequestInfo found in the context
	E1003 18:28:57.558379       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1003 18:28:57.558392       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1003 18:29:19.271382       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.168.131:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.168.131:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.168.131:443: connect: connection refused" logger="UnhandledError"
	W1003 18:29:19.271473       1 handler_proxy.go:99] no RequestInfo found in the context
	E1003 18:29:19.271587       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1003 18:29:19.271925       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.168.131:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.168.131:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.168.131:443: connect: connection refused" logger="UnhandledError"
	E1003 18:29:19.277961       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.168.131:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.168.131:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.168.131:443: connect: connection refused" logger="UnhandledError"
	E1003 18:29:19.298919       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.168.131:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.168.131:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.168.131:443: connect: connection refused" logger="UnhandledError"
	I1003 18:29:19.452985       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1003 18:29:53.842366       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:42378: use of closed network connection
	E1003 18:29:53.969474       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:42396: use of closed network connection
	
	
	==> kube-controller-manager [1cbcaf90a28158f2a4d5495c4b92561650195912704daec05dcf1d9b56429e5c] <==
	I1003 18:27:50.803802       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1003 18:27:50.803855       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1003 18:27:50.803897       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1003 18:27:50.804389       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1003 18:27:50.804420       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1003 18:27:50.804666       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1003 18:27:50.804714       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1003 18:27:50.808488       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1003 18:27:50.808519       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1003 18:27:50.808528       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1003 18:27:50.809352       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1003 18:27:50.810133       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1003 18:27:50.810144       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1003 18:27:50.834434       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-952140" podCIDRs=["10.244.0.0/24"]
	E1003 18:27:56.735760       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1003 18:28:20.785627       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1003 18:28:20.785793       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1003 18:28:20.785834       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1003 18:28:20.812643       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1003 18:28:20.821990       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1003 18:28:20.886395       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1003 18:28:20.923064       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1003 18:28:35.758898       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E1003 18:28:50.891943       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1003 18:28:50.930661       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [28257b7548dee5496025c494fc69f7d27b158c004459fe9cf7e145244cc402b4] <==
	I1003 18:27:52.792976       1 server_linux.go:53] "Using iptables proxy"
	I1003 18:27:52.874495       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1003 18:27:52.977101       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1003 18:27:52.977304       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1003 18:27:52.977409       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1003 18:27:53.017887       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1003 18:27:53.017939       1 server_linux.go:132] "Using iptables Proxier"
	I1003 18:27:53.026366       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1003 18:27:53.026724       1 server.go:527] "Version info" version="v1.34.1"
	I1003 18:27:53.026738       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1003 18:27:53.028165       1 config.go:200] "Starting service config controller"
	I1003 18:27:53.028175       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1003 18:27:53.028192       1 config.go:106] "Starting endpoint slice config controller"
	I1003 18:27:53.028197       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1003 18:27:53.028207       1 config.go:403] "Starting serviceCIDR config controller"
	I1003 18:27:53.028210       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1003 18:27:53.034455       1 config.go:309] "Starting node config controller"
	I1003 18:27:53.034472       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1003 18:27:53.034480       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1003 18:27:53.128846       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1003 18:27:53.128880       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1003 18:27:53.128918       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [22981c6dff74a1d10571b76dae9b7bbbb33ca3843ab35927e1e5997100c5be1c] <==
	I1003 18:27:44.049017       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1003 18:27:44.054565       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1003 18:27:44.055901       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1003 18:27:44.067222       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1003 18:27:44.067471       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1003 18:27:44.067663       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1003 18:27:44.067941       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1003 18:27:44.068054       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1003 18:27:44.068298       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1003 18:27:44.069724       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1003 18:27:44.069857       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1003 18:27:44.069967       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1003 18:27:44.070062       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1003 18:27:44.070172       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1003 18:27:44.070280       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1003 18:27:44.070382       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1003 18:27:44.070921       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1003 18:27:44.071089       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1003 18:27:44.071205       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1003 18:27:44.071269       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1003 18:27:44.923454       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1003 18:27:44.967180       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1003 18:27:45.043147       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1003 18:27:45.088399       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I1003 18:27:47.448458       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 03 18:29:12 addons-952140 kubelet[1298]: I1003 18:29:12.538588    1298 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4412cbb-9785-4408-bac9-ec058dfa768f" path="/var/lib/kubelet/pods/e4412cbb-9785-4408-bac9-ec058dfa768f/volumes"
	Oct 03 18:29:13 addons-952140 kubelet[1298]: I1003 18:29:13.025313    1298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/nvidia-device-plugin-daemonset-84v2d" podStartSLOduration=3.302741644 podStartE2EDuration="41.02529251s" podCreationTimestamp="2025-10-03 18:28:32 +0000 UTC" firstStartedPulling="2025-10-03 18:28:33.859415532 +0000 UTC m=+47.459846783" lastFinishedPulling="2025-10-03 18:29:11.581966398 +0000 UTC m=+85.182397649" observedRunningTime="2025-10-03 18:29:12.271012771 +0000 UTC m=+85.871444030" watchObservedRunningTime="2025-10-03 18:29:13.02529251 +0000 UTC m=+86.625723761"
	Oct 03 18:29:13 addons-952140 kubelet[1298]: I1003 18:29:13.185263    1298 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-84v2d" secret="" err="secret \"gcp-auth\" not found"
	Oct 03 18:29:14 addons-952140 kubelet[1298]: I1003 18:29:14.538565    1298 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2494b280-581c-42c5-9bc6-326d6e0b9d8b" path="/var/lib/kubelet/pods/2494b280-581c-42c5-9bc6-326d6e0b9d8b/volumes"
	Oct 03 18:29:16 addons-952140 kubelet[1298]: I1003 18:29:16.209128    1298 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-4nwwr" secret="" err="secret \"gcp-auth\" not found"
	Oct 03 18:29:16 addons-952140 kubelet[1298]: I1003 18:29:16.230395    1298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-proxy-4nwwr" podStartSLOduration=2.657610083 podStartE2EDuration="44.230377337s" podCreationTimestamp="2025-10-03 18:28:32 +0000 UTC" firstStartedPulling="2025-10-03 18:28:33.888304135 +0000 UTC m=+47.488735386" lastFinishedPulling="2025-10-03 18:29:15.461071381 +0000 UTC m=+89.061502640" observedRunningTime="2025-10-03 18:29:16.230116147 +0000 UTC m=+89.830547406" watchObservedRunningTime="2025-10-03 18:29:16.230377337 +0000 UTC m=+89.830808588"
	Oct 03 18:29:16 addons-952140 kubelet[1298]: I1003 18:29:16.231206    1298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/snapshot-controller-7d9fbc56b8-k5rg9" podStartSLOduration=40.817003616 podStartE2EDuration="1m19.231193549s" podCreationTimestamp="2025-10-03 18:27:57 +0000 UTC" firstStartedPulling="2025-10-03 18:28:33.875285483 +0000 UTC m=+47.475716734" lastFinishedPulling="2025-10-03 18:29:12.289475417 +0000 UTC m=+85.889906667" observedRunningTime="2025-10-03 18:29:13.201845046 +0000 UTC m=+86.802276305" watchObservedRunningTime="2025-10-03 18:29:16.231193549 +0000 UTC m=+89.831624808"
	Oct 03 18:29:17 addons-952140 kubelet[1298]: I1003 18:29:17.214146    1298 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-4nwwr" secret="" err="secret \"gcp-auth\" not found"
	Oct 03 18:29:17 addons-952140 kubelet[1298]: I1003 18:29:17.240100    1298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/metrics-server-85b7d694d7-tscmk" podStartSLOduration=38.107177374 podStartE2EDuration="1m21.240081963s" podCreationTimestamp="2025-10-03 18:27:56 +0000 UTC" firstStartedPulling="2025-10-03 18:28:33.889376659 +0000 UTC m=+47.489807910" lastFinishedPulling="2025-10-03 18:29:17.022281248 +0000 UTC m=+90.622712499" observedRunningTime="2025-10-03 18:29:17.232400676 +0000 UTC m=+90.832831943" watchObservedRunningTime="2025-10-03 18:29:17.240081963 +0000 UTC m=+90.840513214"
	Oct 03 18:29:22 addons-952140 kubelet[1298]: I1003 18:29:22.238245    1298 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/cloud-spanner-emulator-85f6b7fc65-thvpj" secret="" err="secret \"gcp-auth\" not found"
	Oct 03 18:29:23 addons-952140 kubelet[1298]: I1003 18:29:23.251091    1298 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/cloud-spanner-emulator-85f6b7fc65-thvpj" secret="" err="secret \"gcp-auth\" not found"
	Oct 03 18:29:28 addons-952140 kubelet[1298]: I1003 18:29:28.284645    1298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/cloud-spanner-emulator-85f6b7fc65-thvpj" podStartSLOduration=47.250939714 podStartE2EDuration="1m33.284626972s" podCreationTimestamp="2025-10-03 18:27:55 +0000 UTC" firstStartedPulling="2025-10-03 18:28:35.758195171 +0000 UTC m=+49.358626421" lastFinishedPulling="2025-10-03 18:29:21.79188242 +0000 UTC m=+95.392313679" observedRunningTime="2025-10-03 18:29:22.26036057 +0000 UTC m=+95.860791845" watchObservedRunningTime="2025-10-03 18:29:28.284626972 +0000 UTC m=+101.885058223"
	Oct 03 18:29:31 addons-952140 kubelet[1298]: I1003 18:29:31.303936    1298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-9cc49f96f-dwspc" podStartSLOduration=43.455874674 podStartE2EDuration="1m34.303917021s" podCreationTimestamp="2025-10-03 18:27:57 +0000 UTC" firstStartedPulling="2025-10-03 18:28:36.97650932 +0000 UTC m=+50.576940571" lastFinishedPulling="2025-10-03 18:29:27.824551618 +0000 UTC m=+101.424982918" observedRunningTime="2025-10-03 18:29:28.296270267 +0000 UTC m=+101.896701526" watchObservedRunningTime="2025-10-03 18:29:31.303917021 +0000 UTC m=+104.904348272"
	Oct 03 18:29:36 addons-952140 kubelet[1298]: I1003 18:29:36.329523    1298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-qh9mv" podStartSLOduration=45.567718636 podStartE2EDuration="1m35.329503282s" podCreationTimestamp="2025-10-03 18:28:01 +0000 UTC" firstStartedPulling="2025-10-03 18:28:41.071585316 +0000 UTC m=+54.672016566" lastFinishedPulling="2025-10-03 18:29:30.833369961 +0000 UTC m=+104.433801212" observedRunningTime="2025-10-03 18:29:31.307576758 +0000 UTC m=+104.908008026" watchObservedRunningTime="2025-10-03 18:29:36.329503282 +0000 UTC m=+109.929934533"
	Oct 03 18:29:37 addons-952140 kubelet[1298]: E1003 18:29:37.065562    1298 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Oct 03 18:29:37 addons-952140 kubelet[1298]: E1003 18:29:37.065654    1298 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/57dce88b-cd6c-4f39-babf-2079e2174e05-gcr-creds podName:57dce88b-cd6c-4f39-babf-2079e2174e05 nodeName:}" failed. No retries permitted until 2025-10-03 18:30:41.065637049 +0000 UTC m=+174.666068308 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/57dce88b-cd6c-4f39-babf-2079e2174e05-gcr-creds") pod "registry-creds-764b6fb674-dqntl" (UID: "57dce88b-cd6c-4f39-babf-2079e2174e05") : secret "registry-creds-gcr" not found
	Oct 03 18:29:38 addons-952140 kubelet[1298]: I1003 18:29:38.735178    1298 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Oct 03 18:29:38 addons-952140 kubelet[1298]: I1003 18:29:38.735701    1298 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Oct 03 18:29:39 addons-952140 kubelet[1298]: I1003 18:29:39.306021    1298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-8d4lm" podStartSLOduration=67.809295313 podStartE2EDuration="1m42.305993381s" podCreationTimestamp="2025-10-03 18:27:57 +0000 UTC" firstStartedPulling="2025-10-03 18:29:01.556194686 +0000 UTC m=+75.156625937" lastFinishedPulling="2025-10-03 18:29:36.052892754 +0000 UTC m=+109.653324005" observedRunningTime="2025-10-03 18:29:36.331097516 +0000 UTC m=+109.931528784" watchObservedRunningTime="2025-10-03 18:29:39.305993381 +0000 UTC m=+112.906424632"
	Oct 03 18:29:42 addons-952140 kubelet[1298]: I1003 18:29:42.382140    1298 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-vsbgb" podStartSLOduration=2.749261992 podStartE2EDuration="1m10.382124153s" podCreationTimestamp="2025-10-03 18:28:32 +0000 UTC" firstStartedPulling="2025-10-03 18:28:33.844652855 +0000 UTC m=+47.445084106" lastFinishedPulling="2025-10-03 18:29:41.477515016 +0000 UTC m=+115.077946267" observedRunningTime="2025-10-03 18:29:42.381622377 +0000 UTC m=+115.982053636" watchObservedRunningTime="2025-10-03 18:29:42.382124153 +0000 UTC m=+115.982555412"
	Oct 03 18:29:45 addons-952140 kubelet[1298]: I1003 18:29:45.340739    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6szrn\" (UniqueName: \"kubernetes.io/projected/16cbc4dc-bf5a-40a0-892a-b3483ba80b7d-kube-api-access-6szrn\") pod \"busybox\" (UID: \"16cbc4dc-bf5a-40a0-892a-b3483ba80b7d\") " pod="default/busybox"
	Oct 03 18:29:45 addons-952140 kubelet[1298]: I1003 18:29:45.341344    1298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/16cbc4dc-bf5a-40a0-892a-b3483ba80b7d-gcp-creds\") pod \"busybox\" (UID: \"16cbc4dc-bf5a-40a0-892a-b3483ba80b7d\") " pod="default/busybox"
	Oct 03 18:29:46 addons-952140 kubelet[1298]: I1003 18:29:46.554853    1298 scope.go:117] "RemoveContainer" containerID="c79a52dacfab2a61291462edcd3d1e76a3797ca712370213702729b69036e0f7"
	Oct 03 18:29:46 addons-952140 kubelet[1298]: I1003 18:29:46.567382    1298 scope.go:117] "RemoveContainer" containerID="c73302061d24f4ff312a24a44df6c977ebbfbfb675e26ee1a7a1f29ca3971d99"
	Oct 03 18:29:46 addons-952140 kubelet[1298]: E1003 18:29:46.705861    1298 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/94035f86e12c8ccb379820385b208219f54a6f4e7ed85677bba7a3278445d680/diff" to get inode usage: stat /var/lib/containers/storage/overlay/94035f86e12c8ccb379820385b208219f54a6f4e7ed85677bba7a3278445d680/diff: no such file or directory, extraDiskErr: <nil>
	
	
	==> storage-provisioner [82907fef03cc43b849878194de7aef8c729ee89dcf5fddba29650a239ab81e90] <==
	W1003 18:29:30.498091       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:29:32.500797       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:29:32.505473       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:29:34.511007       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:29:34.519560       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:29:36.523197       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:29:36.533596       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:29:38.552222       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:29:38.557892       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:29:40.561190       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:29:40.567921       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:29:42.571554       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:29:42.576065       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:29:44.579454       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:29:44.584189       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:29:46.589536       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:29:46.601027       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:29:48.615671       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:29:48.635038       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:29:50.637864       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:29:50.642453       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:29:52.645507       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:29:52.653440       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:29:54.659688       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:29:54.664355       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-952140 -n addons-952140
helpers_test.go:269: (dbg) Run:  kubectl --context addons-952140 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-4r899 ingress-nginx-admission-patch-bpnzz registry-creds-764b6fb674-dqntl
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-952140 describe pod ingress-nginx-admission-create-4r899 ingress-nginx-admission-patch-bpnzz registry-creds-764b6fb674-dqntl
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-952140 describe pod ingress-nginx-admission-create-4r899 ingress-nginx-admission-patch-bpnzz registry-creds-764b6fb674-dqntl: exit status 1 (103.641423ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-4r899" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-bpnzz" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-dqntl" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-952140 describe pod ingress-nginx-admission-create-4r899 ingress-nginx-admission-patch-bpnzz registry-creds-764b6fb674-dqntl: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-952140 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-952140 addons disable headlamp --alsologtostderr -v=1: exit status 11 (261.712311ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 18:29:57.204888  293688 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:29:57.205831  293688 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:29:57.205873  293688 out.go:374] Setting ErrFile to fd 2...
	I1003 18:29:57.205895  293688 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:29:57.206207  293688 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-284583/.minikube/bin
	I1003 18:29:57.206562  293688 mustload.go:65] Loading cluster: addons-952140
	I1003 18:29:57.206990  293688 config.go:182] Loaded profile config "addons-952140": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:29:57.207038  293688 addons.go:606] checking whether the cluster is paused
	I1003 18:29:57.207191  293688 config.go:182] Loaded profile config "addons-952140": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:29:57.207230  293688 host.go:66] Checking if "addons-952140" exists ...
	I1003 18:29:57.207782  293688 cli_runner.go:164] Run: docker container inspect addons-952140 --format={{.State.Status}}
	I1003 18:29:57.226122  293688 ssh_runner.go:195] Run: systemctl --version
	I1003 18:29:57.226184  293688 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952140
	I1003 18:29:57.245049  293688 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/addons-952140/id_rsa Username:docker}
	I1003 18:29:57.343554  293688 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1003 18:29:57.343636  293688 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1003 18:29:57.374110  293688 cri.go:89] found id: "a2a54b8525b1b03c3294a286e260174ebeb999c736e2e29750632824346e2b8a"
	I1003 18:29:57.374129  293688 cri.go:89] found id: "764f61b1d1b52574dff121ce3057ed9a2791b059752cb80d76e6a5ae323e3765"
	I1003 18:29:57.374135  293688 cri.go:89] found id: "5520f176a27b0060104f01c653743a97419cd7df90b959dc02f4359563db372f"
	I1003 18:29:57.374139  293688 cri.go:89] found id: "a55dd027b4c2417a4e857716af2ec80adf3ee359efc1fcdea96ae017da8094db"
	I1003 18:29:57.374142  293688 cri.go:89] found id: "d11765424ad977c42ad7828e106df59281b6041a6b85d34d604738d051cc2257"
	I1003 18:29:57.374147  293688 cri.go:89] found id: "ba5695d849b4ff437b5c5a4c73351652ea5b855eb0061d3826ad4a2a76513650"
	I1003 18:29:57.374150  293688 cri.go:89] found id: "351cf9cd8e8f80a1ce058ad47867cc1e9e314f2100ba10ef01326c91fbea576c"
	I1003 18:29:57.374153  293688 cri.go:89] found id: "c2d0db82bc7f2bcfc4af04f3633a094c0e554392449fbf12a24ed377b92f941b"
	I1003 18:29:57.374157  293688 cri.go:89] found id: "5925d6c423d79839f9eb8870977fb293e3c6b1ece77aa59bf7c2a4b120ca3ad3"
	I1003 18:29:57.374163  293688 cri.go:89] found id: "228036e3d30218b16026d557d3264fc361f0c7c42c143fc93a96fd7945d8bdf3"
	I1003 18:29:57.374167  293688 cri.go:89] found id: "d38c57e36e3594ef4f8f3d28db24890c659027ed75977701aa969ce142c27e0e"
	I1003 18:29:57.374170  293688 cri.go:89] found id: "8ab3974a2c302b83e53bc5a243fae87bdec8ed1ca2da979ebcc29dabb8f30fc4"
	I1003 18:29:57.374174  293688 cri.go:89] found id: "70497b5707570324a85bde79dadf41e8e6ded9bd45545ee1a7756ba32eed86d6"
	I1003 18:29:57.374177  293688 cri.go:89] found id: "26742750260bfb48e7909f410307ee53b3dafe6b84bb3a467c505e24d28d4fe1"
	I1003 18:29:57.374181  293688 cri.go:89] found id: "7099c81ca982b78bfa4dd5784e69f027f40fb02b99bce69ec1f792090be6a50b"
	I1003 18:29:57.374192  293688 cri.go:89] found id: "2657f869bb8529138f74b802beedcd922a626ac30c50e54c72731eaff1b930c0"
	I1003 18:29:57.374195  293688 cri.go:89] found id: "82907fef03cc43b849878194de7aef8c729ee89dcf5fddba29650a239ab81e90"
	I1003 18:29:57.374200  293688 cri.go:89] found id: "28257b7548dee5496025c494fc69f7d27b158c004459fe9cf7e145244cc402b4"
	I1003 18:29:57.374203  293688 cri.go:89] found id: "1a59139ec0face1693267071ca3c3ba3e8eff397418ffbf25f3682c68eee244a"
	I1003 18:29:57.374206  293688 cri.go:89] found id: "23bd53ece83d04d894e5fc60fda04a6f8bdfe8d6c59ffad6c4dcacc168ec4ed8"
	I1003 18:29:57.374211  293688 cri.go:89] found id: "1cbcaf90a28158f2a4d5495c4b92561650195912704daec05dcf1d9b56429e5c"
	I1003 18:29:57.374214  293688 cri.go:89] found id: "22981c6dff74a1d10571b76dae9b7bbbb33ca3843ab35927e1e5997100c5be1c"
	I1003 18:29:57.374217  293688 cri.go:89] found id: "e937e437e1e79c6bcbb92c82ee9849b6f8ceb2c5980d23b084e27a6fb88ab45a"
	I1003 18:29:57.374220  293688 cri.go:89] found id: ""
	I1003 18:29:57.374270  293688 ssh_runner.go:195] Run: sudo runc list -f json
	I1003 18:29:57.389714  293688 out.go:203] 
	W1003 18:29:57.392628  293688 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-03T18:29:57Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-03T18:29:57Z" level=error msg="open /run/runc: no such file or directory"
	
	W1003 18:29:57.392654  293688 out.go:285] * 
	* 
	W1003 18:29:57.399046  293688 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 18:29:57.402069  293688 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-arm64 -p addons-952140 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (3.16s)
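The Headlamp failure above and the CloudSpanner failure in the next section show the same trace: before disabling an addon, minikube checks whether the cluster is paused by listing kube-system containers and then running `sudo runc list -f json` on the node, and that call exits 1 because /run/runc does not exist under the crio runtime. A minimal sketch of reproducing that check by hand, assuming `minikube ssh -p addons-952140 -- <cmd>` reaches the node (the harness uses its own SSH runner instead):

	# List kube-system containers the same way the addon-disable paused check does (from the log above).
	minikube ssh -p addons-952140 -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system

	# The step that fails in this report: runc has no state directory on a crio node.
	minikube ssh -p addons-952140 -- sudo runc list -f json    # error: open /run/runc: no such file or directory

	# Confirm the directory really is absent on the node.
	minikube ssh -p addons-952140 -- ls -ld /run/runc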

                                                
                                    
TestAddons/parallel/CloudSpanner (5.26s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-85f6b7fc65-thvpj" [03428b31-0f0e-4037-8acd-fb9a6d3f34a5] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003691188s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-952140 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-952140 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (247.00974ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 18:30:16.074202  294154 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:30:16.075050  294154 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:30:16.075065  294154 out.go:374] Setting ErrFile to fd 2...
	I1003 18:30:16.075072  294154 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:30:16.075378  294154 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-284583/.minikube/bin
	I1003 18:30:16.075743  294154 mustload.go:65] Loading cluster: addons-952140
	I1003 18:30:16.076161  294154 config.go:182] Loaded profile config "addons-952140": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:30:16.076183  294154 addons.go:606] checking whether the cluster is paused
	I1003 18:30:16.076333  294154 config.go:182] Loaded profile config "addons-952140": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:30:16.076352  294154 host.go:66] Checking if "addons-952140" exists ...
	I1003 18:30:16.076958  294154 cli_runner.go:164] Run: docker container inspect addons-952140 --format={{.State.Status}}
	I1003 18:30:16.095325  294154 ssh_runner.go:195] Run: systemctl --version
	I1003 18:30:16.095393  294154 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952140
	I1003 18:30:16.112930  294154 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/addons-952140/id_rsa Username:docker}
	I1003 18:30:16.207571  294154 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1003 18:30:16.207660  294154 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1003 18:30:16.240689  294154 cri.go:89] found id: "a2a54b8525b1b03c3294a286e260174ebeb999c736e2e29750632824346e2b8a"
	I1003 18:30:16.240710  294154 cri.go:89] found id: "764f61b1d1b52574dff121ce3057ed9a2791b059752cb80d76e6a5ae323e3765"
	I1003 18:30:16.240716  294154 cri.go:89] found id: "5520f176a27b0060104f01c653743a97419cd7df90b959dc02f4359563db372f"
	I1003 18:30:16.240747  294154 cri.go:89] found id: "a55dd027b4c2417a4e857716af2ec80adf3ee359efc1fcdea96ae017da8094db"
	I1003 18:30:16.240752  294154 cri.go:89] found id: "d11765424ad977c42ad7828e106df59281b6041a6b85d34d604738d051cc2257"
	I1003 18:30:16.240764  294154 cri.go:89] found id: "ba5695d849b4ff437b5c5a4c73351652ea5b855eb0061d3826ad4a2a76513650"
	I1003 18:30:16.240772  294154 cri.go:89] found id: "351cf9cd8e8f80a1ce058ad47867cc1e9e314f2100ba10ef01326c91fbea576c"
	I1003 18:30:16.240775  294154 cri.go:89] found id: "c2d0db82bc7f2bcfc4af04f3633a094c0e554392449fbf12a24ed377b92f941b"
	I1003 18:30:16.240778  294154 cri.go:89] found id: "5925d6c423d79839f9eb8870977fb293e3c6b1ece77aa59bf7c2a4b120ca3ad3"
	I1003 18:30:16.240785  294154 cri.go:89] found id: "228036e3d30218b16026d557d3264fc361f0c7c42c143fc93a96fd7945d8bdf3"
	I1003 18:30:16.240788  294154 cri.go:89] found id: "d38c57e36e3594ef4f8f3d28db24890c659027ed75977701aa969ce142c27e0e"
	I1003 18:30:16.240792  294154 cri.go:89] found id: "8ab3974a2c302b83e53bc5a243fae87bdec8ed1ca2da979ebcc29dabb8f30fc4"
	I1003 18:30:16.240795  294154 cri.go:89] found id: "70497b5707570324a85bde79dadf41e8e6ded9bd45545ee1a7756ba32eed86d6"
	I1003 18:30:16.240798  294154 cri.go:89] found id: "26742750260bfb48e7909f410307ee53b3dafe6b84bb3a467c505e24d28d4fe1"
	I1003 18:30:16.240801  294154 cri.go:89] found id: "7099c81ca982b78bfa4dd5784e69f027f40fb02b99bce69ec1f792090be6a50b"
	I1003 18:30:16.240806  294154 cri.go:89] found id: "2657f869bb8529138f74b802beedcd922a626ac30c50e54c72731eaff1b930c0"
	I1003 18:30:16.240810  294154 cri.go:89] found id: "82907fef03cc43b849878194de7aef8c729ee89dcf5fddba29650a239ab81e90"
	I1003 18:30:16.240814  294154 cri.go:89] found id: "28257b7548dee5496025c494fc69f7d27b158c004459fe9cf7e145244cc402b4"
	I1003 18:30:16.240817  294154 cri.go:89] found id: "1a59139ec0face1693267071ca3c3ba3e8eff397418ffbf25f3682c68eee244a"
	I1003 18:30:16.240820  294154 cri.go:89] found id: "23bd53ece83d04d894e5fc60fda04a6f8bdfe8d6c59ffad6c4dcacc168ec4ed8"
	I1003 18:30:16.240824  294154 cri.go:89] found id: "1cbcaf90a28158f2a4d5495c4b92561650195912704daec05dcf1d9b56429e5c"
	I1003 18:30:16.240827  294154 cri.go:89] found id: "22981c6dff74a1d10571b76dae9b7bbbb33ca3843ab35927e1e5997100c5be1c"
	I1003 18:30:16.240830  294154 cri.go:89] found id: "e937e437e1e79c6bcbb92c82ee9849b6f8ceb2c5980d23b084e27a6fb88ab45a"
	I1003 18:30:16.240833  294154 cri.go:89] found id: ""
	I1003 18:30:16.240891  294154 ssh_runner.go:195] Run: sudo runc list -f json
	I1003 18:30:16.256097  294154 out.go:203] 
	W1003 18:30:16.259191  294154 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-03T18:30:16Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-03T18:30:16Z" level=error msg="open /run/runc: no such file or directory"
	
	W1003 18:30:16.259219  294154 out.go:285] * 
	* 
	W1003 18:30:16.265629  294154 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 18:30:16.268580  294154 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 -p addons-952140 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.26s)
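Note: every addons-disable failure in this run exits with the same MK_ADDON_DISABLE_PAUSED error. Before removing an addon, minikube checks whether the cluster is paused by listing kube-system containers with crictl and then running "sudo runc list -f json" on the node; on this crio node /run/runc does not exist, so the check itself fails with exit status 11. A rough manual reproduction of that check is sketched below; the profile name and both node-side commands are taken from the log above, and the expected outcomes are what this report shows, not a fresh capture.

# Sketch only: re-run the two node-side commands behind the paused check by hand.
out/minikube-linux-arm64 -p addons-952140 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
# succeeds and prints the kube-system container IDs listed above
out/minikube-linux-arm64 -p addons-952140 ssh "sudo runc list -f json"
# fails with: open /run/runc: no such file or directory
# which minikube surfaces as "Exiting due to MK_ADDON_DISABLE_PAUSED" (exit status 11)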

                                                
                                    
x
+
TestAddons/parallel/LocalPath (9.44s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-952140 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-952140 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-952140 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-952140 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-952140 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-952140 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-952140 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-952140 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [34ff8d67-a576-4dd6-b8fc-9a48d346552c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [34ff8d67-a576-4dd6-b8fc-9a48d346552c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [34ff8d67-a576-4dd6-b8fc-9a48d346552c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003533916s
addons_test.go:967: (dbg) Run:  kubectl --context addons-952140 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-952140 ssh "cat /opt/local-path-provisioner/pvc-a5fb303d-41e5-4aba-bf5e-80bf1ea770ef_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-952140 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-952140 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-952140 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-952140 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (281.047321ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 18:30:18.204456  294308 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:30:18.205384  294308 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:30:18.205403  294308 out.go:374] Setting ErrFile to fd 2...
	I1003 18:30:18.205410  294308 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:30:18.205710  294308 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-284583/.minikube/bin
	I1003 18:30:18.206008  294308 mustload.go:65] Loading cluster: addons-952140
	I1003 18:30:18.206380  294308 config.go:182] Loaded profile config "addons-952140": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:30:18.206398  294308 addons.go:606] checking whether the cluster is paused
	I1003 18:30:18.206499  294308 config.go:182] Loaded profile config "addons-952140": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:30:18.206515  294308 host.go:66] Checking if "addons-952140" exists ...
	I1003 18:30:18.207003  294308 cli_runner.go:164] Run: docker container inspect addons-952140 --format={{.State.Status}}
	I1003 18:30:18.224820  294308 ssh_runner.go:195] Run: systemctl --version
	I1003 18:30:18.224877  294308 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952140
	I1003 18:30:18.242104  294308 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/addons-952140/id_rsa Username:docker}
	I1003 18:30:18.339176  294308 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1003 18:30:18.339263  294308 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1003 18:30:18.382482  294308 cri.go:89] found id: "a2a54b8525b1b03c3294a286e260174ebeb999c736e2e29750632824346e2b8a"
	I1003 18:30:18.382511  294308 cri.go:89] found id: "764f61b1d1b52574dff121ce3057ed9a2791b059752cb80d76e6a5ae323e3765"
	I1003 18:30:18.382517  294308 cri.go:89] found id: "5520f176a27b0060104f01c653743a97419cd7df90b959dc02f4359563db372f"
	I1003 18:30:18.382521  294308 cri.go:89] found id: "a55dd027b4c2417a4e857716af2ec80adf3ee359efc1fcdea96ae017da8094db"
	I1003 18:30:18.382525  294308 cri.go:89] found id: "d11765424ad977c42ad7828e106df59281b6041a6b85d34d604738d051cc2257"
	I1003 18:30:18.382528  294308 cri.go:89] found id: "ba5695d849b4ff437b5c5a4c73351652ea5b855eb0061d3826ad4a2a76513650"
	I1003 18:30:18.382531  294308 cri.go:89] found id: "351cf9cd8e8f80a1ce058ad47867cc1e9e314f2100ba10ef01326c91fbea576c"
	I1003 18:30:18.382534  294308 cri.go:89] found id: "c2d0db82bc7f2bcfc4af04f3633a094c0e554392449fbf12a24ed377b92f941b"
	I1003 18:30:18.382559  294308 cri.go:89] found id: "5925d6c423d79839f9eb8870977fb293e3c6b1ece77aa59bf7c2a4b120ca3ad3"
	I1003 18:30:18.382572  294308 cri.go:89] found id: "228036e3d30218b16026d557d3264fc361f0c7c42c143fc93a96fd7945d8bdf3"
	I1003 18:30:18.382575  294308 cri.go:89] found id: "d38c57e36e3594ef4f8f3d28db24890c659027ed75977701aa969ce142c27e0e"
	I1003 18:30:18.382579  294308 cri.go:89] found id: "8ab3974a2c302b83e53bc5a243fae87bdec8ed1ca2da979ebcc29dabb8f30fc4"
	I1003 18:30:18.382582  294308 cri.go:89] found id: "70497b5707570324a85bde79dadf41e8e6ded9bd45545ee1a7756ba32eed86d6"
	I1003 18:30:18.382586  294308 cri.go:89] found id: "26742750260bfb48e7909f410307ee53b3dafe6b84bb3a467c505e24d28d4fe1"
	I1003 18:30:18.382589  294308 cri.go:89] found id: "7099c81ca982b78bfa4dd5784e69f027f40fb02b99bce69ec1f792090be6a50b"
	I1003 18:30:18.382599  294308 cri.go:89] found id: "2657f869bb8529138f74b802beedcd922a626ac30c50e54c72731eaff1b930c0"
	I1003 18:30:18.382607  294308 cri.go:89] found id: "82907fef03cc43b849878194de7aef8c729ee89dcf5fddba29650a239ab81e90"
	I1003 18:30:18.382612  294308 cri.go:89] found id: "28257b7548dee5496025c494fc69f7d27b158c004459fe9cf7e145244cc402b4"
	I1003 18:30:18.382616  294308 cri.go:89] found id: "1a59139ec0face1693267071ca3c3ba3e8eff397418ffbf25f3682c68eee244a"
	I1003 18:30:18.382619  294308 cri.go:89] found id: "23bd53ece83d04d894e5fc60fda04a6f8bdfe8d6c59ffad6c4dcacc168ec4ed8"
	I1003 18:30:18.382637  294308 cri.go:89] found id: "1cbcaf90a28158f2a4d5495c4b92561650195912704daec05dcf1d9b56429e5c"
	I1003 18:30:18.382646  294308 cri.go:89] found id: "22981c6dff74a1d10571b76dae9b7bbbb33ca3843ab35927e1e5997100c5be1c"
	I1003 18:30:18.382649  294308 cri.go:89] found id: "e937e437e1e79c6bcbb92c82ee9849b6f8ceb2c5980d23b084e27a6fb88ab45a"
	I1003 18:30:18.382652  294308 cri.go:89] found id: ""
	I1003 18:30:18.382749  294308 ssh_runner.go:195] Run: sudo runc list -f json
	I1003 18:30:18.401023  294308 out.go:203] 
	W1003 18:30:18.404162  294308 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-03T18:30:18Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-03T18:30:18Z" level=error msg="open /run/runc: no such file or directory"
	
	W1003 18:30:18.404187  294308 out.go:285] * 
	* 
	W1003 18:30:18.410723  294308 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 18:30:18.415035  294308 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-arm64 -p addons-952140 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (9.44s)
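Note: the two manifests applied above (testdata/storage-provisioner-rancher/pvc.yaml and pod.yaml) are not reproduced in this report. For orientation only, here is a minimal hypothetical pair with the same shape the test exercises: a PVC served by the local-path provisioner plus a run=test-local-path busybox pod that writes file1 into the claim. The storage class name, size, and image are assumptions, not the actual testdata contents.

# Hypothetical stand-in for the testdata manifests (sketch only, not the real files).
kubectl --context addons-952140 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  storageClassName: local-path   # assumed: class installed by storage-provisioner-rancher
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 64Mi              # assumed size
---
apiVersion: v1
kind: Pod
metadata:
  name: test-local-path
  labels:
    run: test-local-path
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "echo local-path-provisioner-test > /test/file1"]
    volumeMounts:
    - name: data
      mountPath: /test
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: test-pvc
EOF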

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (5.31s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-84v2d" [c0869084-f969-40cf-8475-57eedeb02a93] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003381018s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-952140 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-952140 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (302.263041ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 18:30:08.746640  293853 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:30:08.747281  293853 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:30:08.747298  293853 out.go:374] Setting ErrFile to fd 2...
	I1003 18:30:08.747304  293853 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:30:08.747618  293853 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-284583/.minikube/bin
	I1003 18:30:08.747913  293853 mustload.go:65] Loading cluster: addons-952140
	I1003 18:30:08.748312  293853 config.go:182] Loaded profile config "addons-952140": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:30:08.748330  293853 addons.go:606] checking whether the cluster is paused
	I1003 18:30:08.748433  293853 config.go:182] Loaded profile config "addons-952140": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:30:08.748448  293853 host.go:66] Checking if "addons-952140" exists ...
	I1003 18:30:08.748949  293853 cli_runner.go:164] Run: docker container inspect addons-952140 --format={{.State.Status}}
	I1003 18:30:08.774236  293853 ssh_runner.go:195] Run: systemctl --version
	I1003 18:30:08.774332  293853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952140
	I1003 18:30:08.807969  293853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/addons-952140/id_rsa Username:docker}
	I1003 18:30:08.907383  293853 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1003 18:30:08.907488  293853 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1003 18:30:08.946600  293853 cri.go:89] found id: "a2a54b8525b1b03c3294a286e260174ebeb999c736e2e29750632824346e2b8a"
	I1003 18:30:08.946624  293853 cri.go:89] found id: "764f61b1d1b52574dff121ce3057ed9a2791b059752cb80d76e6a5ae323e3765"
	I1003 18:30:08.946629  293853 cri.go:89] found id: "5520f176a27b0060104f01c653743a97419cd7df90b959dc02f4359563db372f"
	I1003 18:30:08.946633  293853 cri.go:89] found id: "a55dd027b4c2417a4e857716af2ec80adf3ee359efc1fcdea96ae017da8094db"
	I1003 18:30:08.946637  293853 cri.go:89] found id: "d11765424ad977c42ad7828e106df59281b6041a6b85d34d604738d051cc2257"
	I1003 18:30:08.946641  293853 cri.go:89] found id: "ba5695d849b4ff437b5c5a4c73351652ea5b855eb0061d3826ad4a2a76513650"
	I1003 18:30:08.946644  293853 cri.go:89] found id: "351cf9cd8e8f80a1ce058ad47867cc1e9e314f2100ba10ef01326c91fbea576c"
	I1003 18:30:08.946647  293853 cri.go:89] found id: "c2d0db82bc7f2bcfc4af04f3633a094c0e554392449fbf12a24ed377b92f941b"
	I1003 18:30:08.946649  293853 cri.go:89] found id: "5925d6c423d79839f9eb8870977fb293e3c6b1ece77aa59bf7c2a4b120ca3ad3"
	I1003 18:30:08.946664  293853 cri.go:89] found id: "228036e3d30218b16026d557d3264fc361f0c7c42c143fc93a96fd7945d8bdf3"
	I1003 18:30:08.946668  293853 cri.go:89] found id: "d38c57e36e3594ef4f8f3d28db24890c659027ed75977701aa969ce142c27e0e"
	I1003 18:30:08.946672  293853 cri.go:89] found id: "8ab3974a2c302b83e53bc5a243fae87bdec8ed1ca2da979ebcc29dabb8f30fc4"
	I1003 18:30:08.946676  293853 cri.go:89] found id: "70497b5707570324a85bde79dadf41e8e6ded9bd45545ee1a7756ba32eed86d6"
	I1003 18:30:08.946679  293853 cri.go:89] found id: "26742750260bfb48e7909f410307ee53b3dafe6b84bb3a467c505e24d28d4fe1"
	I1003 18:30:08.946682  293853 cri.go:89] found id: "7099c81ca982b78bfa4dd5784e69f027f40fb02b99bce69ec1f792090be6a50b"
	I1003 18:30:08.946693  293853 cri.go:89] found id: "2657f869bb8529138f74b802beedcd922a626ac30c50e54c72731eaff1b930c0"
	I1003 18:30:08.946701  293853 cri.go:89] found id: "82907fef03cc43b849878194de7aef8c729ee89dcf5fddba29650a239ab81e90"
	I1003 18:30:08.946705  293853 cri.go:89] found id: "28257b7548dee5496025c494fc69f7d27b158c004459fe9cf7e145244cc402b4"
	I1003 18:30:08.946708  293853 cri.go:89] found id: "1a59139ec0face1693267071ca3c3ba3e8eff397418ffbf25f3682c68eee244a"
	I1003 18:30:08.946711  293853 cri.go:89] found id: "23bd53ece83d04d894e5fc60fda04a6f8bdfe8d6c59ffad6c4dcacc168ec4ed8"
	I1003 18:30:08.946716  293853 cri.go:89] found id: "1cbcaf90a28158f2a4d5495c4b92561650195912704daec05dcf1d9b56429e5c"
	I1003 18:30:08.946719  293853 cri.go:89] found id: "22981c6dff74a1d10571b76dae9b7bbbb33ca3843ab35927e1e5997100c5be1c"
	I1003 18:30:08.946722  293853 cri.go:89] found id: "e937e437e1e79c6bcbb92c82ee9849b6f8ceb2c5980d23b084e27a6fb88ab45a"
	I1003 18:30:08.946725  293853 cri.go:89] found id: ""
	I1003 18:30:08.946776  293853 ssh_runner.go:195] Run: sudo runc list -f json
	I1003 18:30:08.961644  293853 out.go:203] 
	W1003 18:30:08.964578  293853 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-03T18:30:08Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-03T18:30:08Z" level=error msg="open /run/runc: no such file or directory"
	
	W1003 18:30:08.964619  293853 out.go:285] * 
	* 
	W1003 18:30:08.971171  293853 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 18:30:08.974098  293853 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-arm64 -p addons-952140 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.31s)
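Note: the readiness gate at addons_test.go:1025 above (waiting up to 6m0s for pods matching "name=nvidia-device-plugin-ds" in kube-system) corresponds roughly to the kubectl invocation below; this is an assumed CLI equivalent for orientation, not the helper's actual implementation.

# Sketch: approximate CLI equivalent of the test's readiness wait.
kubectl --context addons-952140 -n kube-system wait pod -l name=nvidia-device-plugin-ds --for=condition=Ready --timeout=6m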

                                                
                                    
x
+
TestAddons/parallel/Yakd (6.26s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-ccz5v" [552c800a-7def-45ba-bf7e-9d161365566e] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003608956s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-952140 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-952140 addons disable yakd --alsologtostderr -v=1: exit status 11 (257.814129ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 18:30:03.462872  293748 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:30:03.463771  293748 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:30:03.463808  293748 out.go:374] Setting ErrFile to fd 2...
	I1003 18:30:03.463827  293748 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:30:03.464095  293748 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-284583/.minikube/bin
	I1003 18:30:03.464411  293748 mustload.go:65] Loading cluster: addons-952140
	I1003 18:30:03.464881  293748 config.go:182] Loaded profile config "addons-952140": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:30:03.464926  293748 addons.go:606] checking whether the cluster is paused
	I1003 18:30:03.465061  293748 config.go:182] Loaded profile config "addons-952140": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:30:03.465098  293748 host.go:66] Checking if "addons-952140" exists ...
	I1003 18:30:03.465667  293748 cli_runner.go:164] Run: docker container inspect addons-952140 --format={{.State.Status}}
	I1003 18:30:03.483976  293748 ssh_runner.go:195] Run: systemctl --version
	I1003 18:30:03.484038  293748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-952140
	I1003 18:30:03.506236  293748 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/addons-952140/id_rsa Username:docker}
	I1003 18:30:03.603836  293748 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1003 18:30:03.603939  293748 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1003 18:30:03.635336  293748 cri.go:89] found id: "a2a54b8525b1b03c3294a286e260174ebeb999c736e2e29750632824346e2b8a"
	I1003 18:30:03.635399  293748 cri.go:89] found id: "764f61b1d1b52574dff121ce3057ed9a2791b059752cb80d76e6a5ae323e3765"
	I1003 18:30:03.635419  293748 cri.go:89] found id: "5520f176a27b0060104f01c653743a97419cd7df90b959dc02f4359563db372f"
	I1003 18:30:03.635440  293748 cri.go:89] found id: "a55dd027b4c2417a4e857716af2ec80adf3ee359efc1fcdea96ae017da8094db"
	I1003 18:30:03.635460  293748 cri.go:89] found id: "d11765424ad977c42ad7828e106df59281b6041a6b85d34d604738d051cc2257"
	I1003 18:30:03.635495  293748 cri.go:89] found id: "ba5695d849b4ff437b5c5a4c73351652ea5b855eb0061d3826ad4a2a76513650"
	I1003 18:30:03.635512  293748 cri.go:89] found id: "351cf9cd8e8f80a1ce058ad47867cc1e9e314f2100ba10ef01326c91fbea576c"
	I1003 18:30:03.635531  293748 cri.go:89] found id: "c2d0db82bc7f2bcfc4af04f3633a094c0e554392449fbf12a24ed377b92f941b"
	I1003 18:30:03.635551  293748 cri.go:89] found id: "5925d6c423d79839f9eb8870977fb293e3c6b1ece77aa59bf7c2a4b120ca3ad3"
	I1003 18:30:03.635584  293748 cri.go:89] found id: "228036e3d30218b16026d557d3264fc361f0c7c42c143fc93a96fd7945d8bdf3"
	I1003 18:30:03.635610  293748 cri.go:89] found id: "d38c57e36e3594ef4f8f3d28db24890c659027ed75977701aa969ce142c27e0e"
	I1003 18:30:03.635629  293748 cri.go:89] found id: "8ab3974a2c302b83e53bc5a243fae87bdec8ed1ca2da979ebcc29dabb8f30fc4"
	I1003 18:30:03.635648  293748 cri.go:89] found id: "70497b5707570324a85bde79dadf41e8e6ded9bd45545ee1a7756ba32eed86d6"
	I1003 18:30:03.635668  293748 cri.go:89] found id: "26742750260bfb48e7909f410307ee53b3dafe6b84bb3a467c505e24d28d4fe1"
	I1003 18:30:03.635696  293748 cri.go:89] found id: "7099c81ca982b78bfa4dd5784e69f027f40fb02b99bce69ec1f792090be6a50b"
	I1003 18:30:03.635733  293748 cri.go:89] found id: "2657f869bb8529138f74b802beedcd922a626ac30c50e54c72731eaff1b930c0"
	I1003 18:30:03.635761  293748 cri.go:89] found id: "82907fef03cc43b849878194de7aef8c729ee89dcf5fddba29650a239ab81e90"
	I1003 18:30:03.635786  293748 cri.go:89] found id: "28257b7548dee5496025c494fc69f7d27b158c004459fe9cf7e145244cc402b4"
	I1003 18:30:03.635820  293748 cri.go:89] found id: "1a59139ec0face1693267071ca3c3ba3e8eff397418ffbf25f3682c68eee244a"
	I1003 18:30:03.635833  293748 cri.go:89] found id: "23bd53ece83d04d894e5fc60fda04a6f8bdfe8d6c59ffad6c4dcacc168ec4ed8"
	I1003 18:30:03.635839  293748 cri.go:89] found id: "1cbcaf90a28158f2a4d5495c4b92561650195912704daec05dcf1d9b56429e5c"
	I1003 18:30:03.635843  293748 cri.go:89] found id: "22981c6dff74a1d10571b76dae9b7bbbb33ca3843ab35927e1e5997100c5be1c"
	I1003 18:30:03.635846  293748 cri.go:89] found id: "e937e437e1e79c6bcbb92c82ee9849b6f8ceb2c5980d23b084e27a6fb88ab45a"
	I1003 18:30:03.635848  293748 cri.go:89] found id: ""
	I1003 18:30:03.635909  293748 ssh_runner.go:195] Run: sudo runc list -f json
	I1003 18:30:03.651822  293748 out.go:203] 
	W1003 18:30:03.654855  293748 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-03T18:30:03Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-03T18:30:03Z" level=error msg="open /run/runc: no such file or directory"
	
	W1003 18:30:03.654911  293748 out.go:285] * 
	* 
	W1003 18:30:03.661609  293748 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 18:30:03.664667  293748 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-arm64 -p addons-952140 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.26s)

                                                
                                    
x
+
TestForceSystemdFlag (516.68s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-855981 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1003 19:24:28.488510  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 19:24:45.417208  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 19:24:55.036106  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/functional-680560/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p force-systemd-flag-855981 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: exit status 80 (8m32.865875443s)

                                                
                                                
-- stdout --
	* [force-systemd-flag-855981] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21625
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21625-284583/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-284583/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "force-systemd-flag-855981" primary control-plane node in "force-systemd-flag-855981" cluster
	* Pulling base image v0.0.48-1759382731-21643 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 19:23:55.109833  448124 out.go:360] Setting OutFile to fd 1 ...
	I1003 19:23:55.110056  448124 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 19:23:55.110084  448124 out.go:374] Setting ErrFile to fd 2...
	I1003 19:23:55.110105  448124 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 19:23:55.110424  448124 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-284583/.minikube/bin
	I1003 19:23:55.110917  448124 out.go:368] Setting JSON to false
	I1003 19:23:55.111942  448124 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7587,"bootTime":1759511849,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1003 19:23:55.112056  448124 start.go:140] virtualization:  
	I1003 19:23:55.115838  448124 out.go:179] * [force-systemd-flag-855981] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1003 19:23:55.120389  448124 out.go:179]   - MINIKUBE_LOCATION=21625
	I1003 19:23:55.120460  448124 notify.go:220] Checking for updates...
	I1003 19:23:55.127029  448124 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 19:23:55.130868  448124 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21625-284583/kubeconfig
	I1003 19:23:55.134106  448124 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-284583/.minikube
	I1003 19:23:55.137225  448124 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1003 19:23:55.140336  448124 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 19:23:55.143786  448124 config.go:182] Loaded profile config "kubernetes-upgrade-629875": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 19:23:55.143899  448124 driver.go:421] Setting default libvirt URI to qemu:///system
	I1003 19:23:55.176268  448124 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1003 19:23:55.176407  448124 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 19:23:55.243015  448124 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-03 19:23:55.233327527 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1003 19:23:55.243132  448124 docker.go:318] overlay module found
	I1003 19:23:55.246251  448124 out.go:179] * Using the docker driver based on user configuration
	I1003 19:23:55.249216  448124 start.go:304] selected driver: docker
	I1003 19:23:55.249236  448124 start.go:924] validating driver "docker" against <nil>
	I1003 19:23:55.249250  448124 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 19:23:55.250006  448124 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 19:23:55.301093  448124 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-03 19:23:55.292202303 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1003 19:23:55.301248  448124 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1003 19:23:55.301490  448124 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1003 19:23:55.304389  448124 out.go:179] * Using Docker driver with root privileges
	I1003 19:23:55.307199  448124 cni.go:84] Creating CNI manager for ""
	I1003 19:23:55.307267  448124 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 19:23:55.307281  448124 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1003 19:23:55.307361  448124 start.go:348] cluster config:
	{Name:force-systemd-flag-855981 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-855981 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 19:23:55.310450  448124 out.go:179] * Starting "force-systemd-flag-855981" primary control-plane node in "force-systemd-flag-855981" cluster
	I1003 19:23:55.313278  448124 cache.go:123] Beginning downloading kic base image for docker with crio
	I1003 19:23:55.316172  448124 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1003 19:23:55.318919  448124 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 19:23:55.318945  448124 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1003 19:23:55.318975  448124 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21625-284583/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1003 19:23:55.318993  448124 cache.go:58] Caching tarball of preloaded images
	I1003 19:23:55.319071  448124 preload.go:233] Found /home/jenkins/minikube-integration/21625-284583/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1003 19:23:55.319081  448124 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1003 19:23:55.319180  448124 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-flag-855981/config.json ...
	I1003 19:23:55.319206  448124 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-flag-855981/config.json: {Name:mk6366da5e4d1fc5aebedea4ebab298d707f3485 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:23:55.339099  448124 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1003 19:23:55.339123  448124 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1003 19:23:55.339141  448124 cache.go:232] Successfully downloaded all kic artifacts
	I1003 19:23:55.339163  448124 start.go:360] acquireMachinesLock for force-systemd-flag-855981: {Name:mk9f25941007180cf01517337637c788f367dc37 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 19:23:55.339275  448124 start.go:364] duration metric: took 94.492µs to acquireMachinesLock for "force-systemd-flag-855981"
	I1003 19:23:55.339304  448124 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-855981 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-855981 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1003 19:23:55.339373  448124 start.go:125] createHost starting for "" (driver="docker")
	I1003 19:23:55.342931  448124 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1003 19:23:55.343162  448124 start.go:159] libmachine.API.Create for "force-systemd-flag-855981" (driver="docker")
	I1003 19:23:55.343212  448124 client.go:168] LocalClient.Create starting
	I1003 19:23:55.343303  448124 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem
	I1003 19:23:55.343342  448124 main.go:141] libmachine: Decoding PEM data...
	I1003 19:23:55.343362  448124 main.go:141] libmachine: Parsing certificate...
	I1003 19:23:55.343416  448124 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem
	I1003 19:23:55.343443  448124 main.go:141] libmachine: Decoding PEM data...
	I1003 19:23:55.343457  448124 main.go:141] libmachine: Parsing certificate...
	I1003 19:23:55.343835  448124 cli_runner.go:164] Run: docker network inspect force-systemd-flag-855981 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1003 19:23:55.359062  448124 cli_runner.go:211] docker network inspect force-systemd-flag-855981 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1003 19:23:55.359156  448124 network_create.go:284] running [docker network inspect force-systemd-flag-855981] to gather additional debugging logs...
	I1003 19:23:55.359175  448124 cli_runner.go:164] Run: docker network inspect force-systemd-flag-855981
	W1003 19:23:55.375668  448124 cli_runner.go:211] docker network inspect force-systemd-flag-855981 returned with exit code 1
	I1003 19:23:55.375697  448124 network_create.go:287] error running [docker network inspect force-systemd-flag-855981]: docker network inspect force-systemd-flag-855981: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-855981 not found
	I1003 19:23:55.375709  448124 network_create.go:289] output of [docker network inspect force-systemd-flag-855981]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-855981 not found
	
	** /stderr **
	I1003 19:23:55.375817  448124 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 19:23:55.392569  448124 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-3a8a28910ba8 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6e:7a:d0:f8:54:63} reservation:<nil>}
	I1003 19:23:55.393053  448124 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-157403cbb468 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:8a:ee:cb:12:bf:d0} reservation:<nil>}
	I1003 19:23:55.393302  448124 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8d1e24f7a986 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:9e:1b:b1:d8:1a:13} reservation:<nil>}
	I1003 19:23:55.393821  448124 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019a9790}
	I1003 19:23:55.393845  448124 network_create.go:124] attempt to create docker network force-systemd-flag-855981 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1003 19:23:55.393915  448124 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-855981 force-systemd-flag-855981
	I1003 19:23:55.454171  448124 network_create.go:108] docker network force-systemd-flag-855981 192.168.76.0/24 created
	I1003 19:23:55.454200  448124 kic.go:121] calculated static IP "192.168.76.2" for the "force-systemd-flag-855981" container
	I1003 19:23:55.454271  448124 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1003 19:23:55.471112  448124 cli_runner.go:164] Run: docker volume create force-systemd-flag-855981 --label name.minikube.sigs.k8s.io=force-systemd-flag-855981 --label created_by.minikube.sigs.k8s.io=true
	I1003 19:23:55.488459  448124 oci.go:103] Successfully created a docker volume force-systemd-flag-855981
	I1003 19:23:55.488547  448124 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-855981-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-855981 --entrypoint /usr/bin/test -v force-systemd-flag-855981:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1003 19:23:56.023839  448124 oci.go:107] Successfully prepared a docker volume force-systemd-flag-855981
	I1003 19:23:56.023893  448124 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 19:23:56.023923  448124 kic.go:194] Starting extracting preloaded images to volume ...
	I1003 19:23:56.024015  448124 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21625-284583/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-855981:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1003 19:24:00.681122  448124 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21625-284583/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-855981:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.657049014s)
	I1003 19:24:00.681159  448124 kic.go:203] duration metric: took 4.657240214s to extract preloaded images to volume ...
	W1003 19:24:00.681298  448124 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1003 19:24:00.681427  448124 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1003 19:24:00.739414  448124 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-855981 --name force-systemd-flag-855981 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-855981 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-855981 --network force-systemd-flag-855981 --ip 192.168.76.2 --volume force-systemd-flag-855981:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1003 19:24:01.055159  448124 cli_runner.go:164] Run: docker container inspect force-systemd-flag-855981 --format={{.State.Running}}
	I1003 19:24:01.074666  448124 cli_runner.go:164] Run: docker container inspect force-systemd-flag-855981 --format={{.State.Status}}
	I1003 19:24:01.097670  448124 cli_runner.go:164] Run: docker exec force-systemd-flag-855981 stat /var/lib/dpkg/alternatives/iptables
	I1003 19:24:01.167967  448124 oci.go:144] the created container "force-systemd-flag-855981" has a running status.
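
The repeated `docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'"` calls below resolve which random host port Docker bound to the container's SSH port (33398 in this run). A small Go equivalent of that lookup, assuming only that Docker is on PATH:

	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func main() {
		// Same Go template the log uses: ask Docker which host port was bound
		// to the container's 22/tcp (33398 in this run).
		out, err := exec.Command("docker", "container", "inspect", "-f",
			`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
			"force-systemd-flag-855981").Output()
		if err != nil {
			fmt.Println("inspect:", err)
			return
		}
		fmt.Println("SSH reachable at 127.0.0.1:" + strings.TrimSpace(string(out)))
	}
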
	I1003 19:24:01.168004  448124 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21625-284583/.minikube/machines/force-systemd-flag-855981/id_rsa...
	I1003 19:24:01.838040  448124 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-284583/.minikube/machines/force-systemd-flag-855981/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1003 19:24:01.838089  448124 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21625-284583/.minikube/machines/force-systemd-flag-855981/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1003 19:24:01.857061  448124 cli_runner.go:164] Run: docker container inspect force-systemd-flag-855981 --format={{.State.Status}}
	I1003 19:24:01.873301  448124 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1003 19:24:01.873325  448124 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-855981 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1003 19:24:01.913330  448124 cli_runner.go:164] Run: docker container inspect force-systemd-flag-855981 --format={{.State.Status}}
	I1003 19:24:01.930504  448124 machine.go:93] provisionDockerMachine start ...
	I1003 19:24:01.930623  448124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-855981
	I1003 19:24:01.947878  448124 main.go:141] libmachine: Using SSH client type: native
	I1003 19:24:01.948225  448124 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33398 <nil> <nil>}
	I1003 19:24:01.948241  448124 main.go:141] libmachine: About to run SSH command:
	hostname
	I1003 19:24:01.948910  448124 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1003 19:24:05.116882  448124 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-855981
	
	I1003 19:24:05.116919  448124 ubuntu.go:182] provisioning hostname "force-systemd-flag-855981"
	I1003 19:24:05.117025  448124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-855981
	I1003 19:24:05.136588  448124 main.go:141] libmachine: Using SSH client type: native
	I1003 19:24:05.136942  448124 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33398 <nil> <nil>}
	I1003 19:24:05.136962  448124 main.go:141] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-855981 && echo "force-systemd-flag-855981" | sudo tee /etc/hostname
	I1003 19:24:05.278987  448124 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-855981
	
	I1003 19:24:05.279099  448124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-855981
	I1003 19:24:05.298469  448124 main.go:141] libmachine: Using SSH client type: native
	I1003 19:24:05.298776  448124 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33398 <nil> <nil>}
	I1003 19:24:05.298799  448124 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-855981' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-855981/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-855981' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 19:24:05.437358  448124 main.go:141] libmachine: SSH cmd err, output: <nil>: 
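
provisionDockerMachine drives the node over the forwarded SSH port using the generated id_rsa key. The following is a rough sketch of the same hostname command with golang.org/x/crypto/ssh, using the port and key path reported above; it is an illustration of the step, not minikube's libmachine client.

	package main
	
	import (
		"fmt"
		"os"
	
		"golang.org/x/crypto/ssh"
	)
	
	func main() {
		// Key path and port are the ones reported in this log run (assumptions).
		key, err := os.ReadFile("/home/jenkins/minikube-integration/21625-284583/.minikube/machines/force-systemd-flag-855981/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test container
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:33398", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()
	
		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer sess.Close()
	
		// Same command the provisioner runs to persist the hostname.
		out, err := sess.CombinedOutput(`sudo hostname force-systemd-flag-855981 && echo "force-systemd-flag-855981" | sudo tee /etc/hostname`)
		fmt.Printf("err=%v output=%s\n", err, out)
	}
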
	I1003 19:24:05.437383  448124 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21625-284583/.minikube CaCertPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21625-284583/.minikube}
	I1003 19:24:05.437418  448124 ubuntu.go:190] setting up certificates
	I1003 19:24:05.437427  448124 provision.go:84] configureAuth start
	I1003 19:24:05.437491  448124 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-855981
	I1003 19:24:05.456459  448124 provision.go:143] copyHostCerts
	I1003 19:24:05.456500  448124 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21625-284583/.minikube/ca.pem
	I1003 19:24:05.456533  448124 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-284583/.minikube/ca.pem, removing ...
	I1003 19:24:05.456540  448124 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-284583/.minikube/ca.pem
	I1003 19:24:05.456628  448124 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21625-284583/.minikube/ca.pem (1082 bytes)
	I1003 19:24:05.456755  448124 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21625-284583/.minikube/cert.pem
	I1003 19:24:05.456795  448124 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-284583/.minikube/cert.pem, removing ...
	I1003 19:24:05.456801  448124 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-284583/.minikube/cert.pem
	I1003 19:24:05.456836  448124 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21625-284583/.minikube/cert.pem (1123 bytes)
	I1003 19:24:05.456919  448124 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21625-284583/.minikube/key.pem
	I1003 19:24:05.456942  448124 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-284583/.minikube/key.pem, removing ...
	I1003 19:24:05.456950  448124 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-284583/.minikube/key.pem
	I1003 19:24:05.456987  448124 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21625-284583/.minikube/key.pem (1675 bytes)
	I1003 19:24:05.457054  448124 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-855981 san=[127.0.0.1 192.168.76.2 force-systemd-flag-855981 localhost minikube]
	I1003 19:24:06.007894  448124 provision.go:177] copyRemoteCerts
	I1003 19:24:06.007990  448124 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 19:24:06.008037  448124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-855981
	I1003 19:24:06.028156  448124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33398 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/force-systemd-flag-855981/id_rsa Username:docker}
	I1003 19:24:06.125204  448124 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1003 19:24:06.125336  448124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 19:24:06.145713  448124 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1003 19:24:06.145819  448124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1003 19:24:06.163754  448124 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1003 19:24:06.163818  448124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1003 19:24:06.181640  448124 provision.go:87] duration metric: took 744.190736ms to configureAuth
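
configureAuth issues a server certificate whose SANs match the `san=[...]` list above. Below is a self-contained sketch of issuing such a certificate with crypto/x509, generating a stand-in CA in-process instead of loading .minikube/certs/ca.pem; error handling is minimal for brevity.

	package main
	
	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)
	
	func main() {
		// Stand-in CA (errors ignored for brevity in this sketch).
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)
	
		// Server cert with the SANs reported by provision.go above.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.force-systemd-flag-855981"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"force-systemd-flag-855981", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}
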
	I1003 19:24:06.181667  448124 ubuntu.go:206] setting minikube options for container-runtime
	I1003 19:24:06.181854  448124 config.go:182] Loaded profile config "force-systemd-flag-855981": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 19:24:06.181954  448124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-855981
	I1003 19:24:06.200981  448124 main.go:141] libmachine: Using SSH client type: native
	I1003 19:24:06.201291  448124 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33398 <nil> <nil>}
	I1003 19:24:06.201305  448124 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1003 19:24:06.447765  448124 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1003 19:24:06.447787  448124 machine.go:96] duration metric: took 4.517264592s to provisionDockerMachine
	I1003 19:24:06.447805  448124 client.go:171] duration metric: took 11.104580351s to LocalClient.Create
	I1003 19:24:06.447817  448124 start.go:167] duration metric: took 11.104657219s to libmachine.API.Create "force-systemd-flag-855981"
	I1003 19:24:06.447825  448124 start.go:293] postStartSetup for "force-systemd-flag-855981" (driver="docker")
	I1003 19:24:06.447834  448124 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 19:24:06.447905  448124 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 19:24:06.447957  448124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-855981
	I1003 19:24:06.466449  448124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33398 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/force-systemd-flag-855981/id_rsa Username:docker}
	I1003 19:24:06.564957  448124 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 19:24:06.568309  448124 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1003 19:24:06.568337  448124 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1003 19:24:06.568348  448124 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-284583/.minikube/addons for local assets ...
	I1003 19:24:06.568405  448124 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-284583/.minikube/files for local assets ...
	I1003 19:24:06.568494  448124 filesync.go:149] local asset: /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem -> 2864342.pem in /etc/ssl/certs
	I1003 19:24:06.568505  448124 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem -> /etc/ssl/certs/2864342.pem
	I1003 19:24:06.568617  448124 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 19:24:06.576380  448124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem --> /etc/ssl/certs/2864342.pem (1708 bytes)
	I1003 19:24:06.594647  448124 start.go:296] duration metric: took 146.807944ms for postStartSetup
	I1003 19:24:06.595032  448124 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-855981
	I1003 19:24:06.614489  448124 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-flag-855981/config.json ...
	I1003 19:24:06.614775  448124 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 19:24:06.614832  448124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-855981
	I1003 19:24:06.631891  448124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33398 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/force-systemd-flag-855981/id_rsa Username:docker}
	I1003 19:24:06.726127  448124 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1003 19:24:06.730896  448124 start.go:128] duration metric: took 11.391507674s to createHost
	I1003 19:24:06.730920  448124 start.go:83] releasing machines lock for "force-systemd-flag-855981", held for 11.391634273s
	I1003 19:24:06.730997  448124 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-855981
	I1003 19:24:06.747793  448124 ssh_runner.go:195] Run: cat /version.json
	I1003 19:24:06.747856  448124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-855981
	I1003 19:24:06.747912  448124 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 19:24:06.747993  448124 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-855981
	I1003 19:24:06.771708  448124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33398 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/force-systemd-flag-855981/id_rsa Username:docker}
	I1003 19:24:06.772334  448124 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33398 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/force-systemd-flag-855981/id_rsa Username:docker}
	I1003 19:24:06.961749  448124 ssh_runner.go:195] Run: systemctl --version
	I1003 19:24:06.968512  448124 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1003 19:24:07.008339  448124 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1003 19:24:07.013150  448124 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 19:24:07.013220  448124 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 19:24:07.041515  448124 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1003 19:24:07.041548  448124 start.go:495] detecting cgroup driver to use...
	I1003 19:24:07.041561  448124 start.go:499] using "systemd" cgroup driver as enforced via flags
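
Because the test passes --force-systemd, the systemd cgroup driver comes from flags rather than detection. For reference, one common heuristic for telling cgroup v1 from v2 on the host, which bears on the "cgroups v1 support is in maintenance mode" warning seen later in this log (a generic probe, not necessarily minikube's exact code):

	package main
	
	import (
		"fmt"
		"os"
	)
	
	func main() {
		// On a cgroup v2 (unified) host this file exists; on v1 it does not.
		if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
			fmt.Println("cgroup v2 (unified hierarchy)")
		} else {
			fmt.Println("cgroup v1 (legacy hierarchy)")
		}
	}
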
	I1003 19:24:07.041655  448124 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 19:24:07.059703  448124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 19:24:07.072517  448124 docker.go:218] disabling cri-docker service (if available) ...
	I1003 19:24:07.072627  448124 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1003 19:24:07.090792  448124 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1003 19:24:07.109488  448124 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1003 19:24:07.221680  448124 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1003 19:24:07.347650  448124 docker.go:234] disabling docker service ...
	I1003 19:24:07.347725  448124 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1003 19:24:07.370962  448124 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1003 19:24:07.384423  448124 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1003 19:24:07.502698  448124 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1003 19:24:07.627233  448124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1003 19:24:07.643013  448124 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 19:24:07.657790  448124 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1003 19:24:07.657860  448124 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:24:07.667309  448124 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1003 19:24:07.667378  448124 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:24:07.678734  448124 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:24:07.687992  448124 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:24:07.697204  448124 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 19:24:07.705517  448124 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:24:07.714857  448124 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:24:07.729407  448124 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:24:07.738515  448124 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 19:24:07.746085  448124 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 19:24:07.753463  448124 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 19:24:07.874388  448124 ssh_runner.go:195] Run: sudo systemctl restart crio
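
The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses the expected pause image and the systemd cgroup manager before the restart. A pure-Go sketch of the two central substitutions, with the file path assumed from the log (run on the node, as root):

	package main
	
	import (
		"fmt"
		"os"
		"regexp"
	)
	
	func main() {
		const path = "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(path)
		if err != nil {
			fmt.Println("read:", err)
			return
		}
		conf := string(data)
	
		// Equivalent of the two sed substitutions in the log: force the pause
		// image and switch the cgroup manager to systemd.
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, `cgroup_manager = "systemd"`)
	
		if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
			fmt.Println("write:", err)
			return
		}
		fmt.Println("updated", path, "- restart crio to apply")
	}
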
	I1003 19:24:08.010545  448124 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1003 19:24:08.010626  448124 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1003 19:24:08.015526  448124 start.go:563] Will wait 60s for crictl version
	I1003 19:24:08.015617  448124 ssh_runner.go:195] Run: which crictl
	I1003 19:24:08.019887  448124 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1003 19:24:08.046019  448124 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1003 19:24:08.046118  448124 ssh_runner.go:195] Run: crio --version
	I1003 19:24:08.075317  448124 ssh_runner.go:195] Run: crio --version
	I1003 19:24:08.114287  448124 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1003 19:24:08.117103  448124 cli_runner.go:164] Run: docker network inspect force-systemd-flag-855981 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 19:24:08.133580  448124 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1003 19:24:08.137583  448124 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 19:24:08.147372  448124 kubeadm.go:883] updating cluster {Name:force-systemd-flag-855981 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-855981 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1003 19:24:08.147479  448124 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 19:24:08.147537  448124 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 19:24:08.181732  448124 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 19:24:08.181759  448124 crio.go:433] Images already preloaded, skipping extraction
	I1003 19:24:08.181830  448124 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 19:24:08.212232  448124 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 19:24:08.212253  448124 cache_images.go:85] Images are preloaded, skipping loading
	I1003 19:24:08.212262  448124 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1003 19:24:08.212381  448124 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-flag-855981 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-855981 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1003 19:24:08.212476  448124 ssh_runner.go:195] Run: crio config
	I1003 19:24:08.274727  448124 cni.go:84] Creating CNI manager for ""
	I1003 19:24:08.274748  448124 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 19:24:08.274769  448124 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1003 19:24:08.274796  448124 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-855981 NodeName:force-systemd-flag-855981 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 19:24:08.274969  448124 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-flag-855981"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1003 19:24:08.275056  448124 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1003 19:24:08.282901  448124 binaries.go:44] Found k8s binaries, skipping transfer
	I1003 19:24:08.282973  448124 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1003 19:24:08.290817  448124 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1003 19:24:08.304180  448124 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 19:24:08.317214  448124 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
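
Since this test exists to force the systemd cgroup driver, the part of the generated kubeadm config that matters most is `cgroupDriver: systemd` in the KubeletConfiguration document above. A small sketch that parses the multi-document YAML with gopkg.in/yaml.v3 and prints that field; the path is the kubeadm.yaml.new just copied to the node and is an assumption of this run:

	package main
	
	import (
		"fmt"
		"os"
	
		"gopkg.in/yaml.v3"
	)
	
	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			fmt.Println("open:", err)
			return
		}
		defer f.Close()
	
		// The file holds several YAML documents; decode them one by one.
		dec := yaml.NewDecoder(f)
		for {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err != nil {
				break // io.EOF once all documents are read
			}
			if doc["kind"] == "KubeletConfiguration" {
				fmt.Println("cgroupDriver =", doc["cgroupDriver"]) // expect "systemd"
			}
		}
	}
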
	I1003 19:24:08.331763  448124 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1003 19:24:08.335438  448124 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 19:24:08.345298  448124 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 19:24:08.461072  448124 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 19:24:08.476715  448124 certs.go:69] Setting up /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-flag-855981 for IP: 192.168.76.2
	I1003 19:24:08.476825  448124 certs.go:195] generating shared ca certs ...
	I1003 19:24:08.476856  448124 certs.go:227] acquiring lock for ca certs: {Name:mk5a10e6c921326e9c211447576eaeb893259ba7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:24:08.477012  448124 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21625-284583/.minikube/ca.key
	I1003 19:24:08.477094  448124 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.key
	I1003 19:24:08.477128  448124 certs.go:257] generating profile certs ...
	I1003 19:24:08.477207  448124 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-flag-855981/client.key
	I1003 19:24:08.477243  448124 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-flag-855981/client.crt with IP's: []
	I1003 19:24:08.692622  448124 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-flag-855981/client.crt ...
	I1003 19:24:08.692656  448124 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-flag-855981/client.crt: {Name:mkd1f1aaab736711da5e9df664289581cdf3856e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:24:08.692899  448124 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-flag-855981/client.key ...
	I1003 19:24:08.692926  448124 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-flag-855981/client.key: {Name:mkc23aec1846632d163d01163d74fa57ff3f55d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:24:08.693084  448124 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-flag-855981/apiserver.key.5e0eeb1a
	I1003 19:24:08.693123  448124 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-flag-855981/apiserver.crt.5e0eeb1a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1003 19:24:09.623987  448124 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-flag-855981/apiserver.crt.5e0eeb1a ...
	I1003 19:24:09.624029  448124 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-flag-855981/apiserver.crt.5e0eeb1a: {Name:mk32532715af940ad80f36bcff4d1cbb3e124413 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:24:09.624229  448124 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-flag-855981/apiserver.key.5e0eeb1a ...
	I1003 19:24:09.624244  448124 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-flag-855981/apiserver.key.5e0eeb1a: {Name:mkcedc65fc15c6042fcd5f6fbe32b53f08c70208 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:24:09.624335  448124 certs.go:382] copying /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-flag-855981/apiserver.crt.5e0eeb1a -> /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-flag-855981/apiserver.crt
	I1003 19:24:09.624416  448124 certs.go:386] copying /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-flag-855981/apiserver.key.5e0eeb1a -> /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-flag-855981/apiserver.key
	I1003 19:24:09.624485  448124 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-flag-855981/proxy-client.key
	I1003 19:24:09.624503  448124 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-flag-855981/proxy-client.crt with IP's: []
	I1003 19:24:10.162654  448124 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-flag-855981/proxy-client.crt ...
	I1003 19:24:10.162696  448124 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-flag-855981/proxy-client.crt: {Name:mke813d0c4cbcc77018958abb8135d3941c519a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:24:10.162928  448124 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-flag-855981/proxy-client.key ...
	I1003 19:24:10.162944  448124 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-flag-855981/proxy-client.key: {Name:mk31e6fb17aebb3ae34d3a2163e1820ab6a94361 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:24:10.163043  448124 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-284583/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1003 19:24:10.163065  448124 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-284583/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1003 19:24:10.163077  448124 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1003 19:24:10.163098  448124 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1003 19:24:10.163110  448124 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-flag-855981/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1003 19:24:10.163127  448124 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-flag-855981/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1003 19:24:10.163139  448124 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-flag-855981/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1003 19:24:10.163149  448124 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-flag-855981/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1003 19:24:10.163199  448124 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/286434.pem (1338 bytes)
	W1003 19:24:10.163238  448124 certs.go:480] ignoring /home/jenkins/minikube-integration/21625-284583/.minikube/certs/286434_empty.pem, impossibly tiny 0 bytes
	I1003 19:24:10.163247  448124 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 19:24:10.163270  448124 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem (1082 bytes)
	I1003 19:24:10.163298  448124 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem (1123 bytes)
	I1003 19:24:10.163326  448124 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/key.pem (1675 bytes)
	I1003 19:24:10.163371  448124 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem (1708 bytes)
	I1003 19:24:10.163407  448124 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/286434.pem -> /usr/share/ca-certificates/286434.pem
	I1003 19:24:10.163425  448124 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem -> /usr/share/ca-certificates/2864342.pem
	I1003 19:24:10.163438  448124 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-284583/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1003 19:24:10.164001  448124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 19:24:10.194417  448124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1003 19:24:10.214102  448124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 19:24:10.237644  448124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1003 19:24:10.259415  448124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-flag-855981/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1003 19:24:10.277841  448124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-flag-855981/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1003 19:24:10.296837  448124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-flag-855981/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 19:24:10.315359  448124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-flag-855981/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1003 19:24:10.334827  448124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/certs/286434.pem --> /usr/share/ca-certificates/286434.pem (1338 bytes)
	I1003 19:24:10.354468  448124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem --> /usr/share/ca-certificates/2864342.pem (1708 bytes)
	I1003 19:24:10.373161  448124 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 19:24:10.392261  448124 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1003 19:24:10.405487  448124 ssh_runner.go:195] Run: openssl version
	I1003 19:24:10.411939  448124 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/286434.pem && ln -fs /usr/share/ca-certificates/286434.pem /etc/ssl/certs/286434.pem"
	I1003 19:24:10.420417  448124 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/286434.pem
	I1003 19:24:10.424423  448124 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  3 18:34 /usr/share/ca-certificates/286434.pem
	I1003 19:24:10.424527  448124 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/286434.pem
	I1003 19:24:10.466215  448124 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/286434.pem /etc/ssl/certs/51391683.0"
	I1003 19:24:10.474746  448124 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2864342.pem && ln -fs /usr/share/ca-certificates/2864342.pem /etc/ssl/certs/2864342.pem"
	I1003 19:24:10.483371  448124 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2864342.pem
	I1003 19:24:10.487522  448124 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  3 18:34 /usr/share/ca-certificates/2864342.pem
	I1003 19:24:10.487593  448124 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2864342.pem
	I1003 19:24:10.530623  448124 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2864342.pem /etc/ssl/certs/3ec20f2e.0"
	I1003 19:24:10.539349  448124 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 19:24:10.548218  448124 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 19:24:10.552163  448124 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  3 18:27 /usr/share/ca-certificates/minikubeCA.pem
	I1003 19:24:10.552229  448124 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 19:24:10.595939  448124 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
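
The openssl/ln pairs above install each CA under /etc/ssl/certs by OpenSSL subject hash (for example b5213941.0 for minikubeCA.pem). A Go sketch of the same two steps, shelling out to openssl for the hash; the paths are the ones from this log and creating the symlink needs root:

	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)
	
	func main() {
		cert := "/usr/share/ca-certificates/minikubeCA.pem"
	
		// `openssl x509 -hash -noout -in <cert>` prints the subject hash that
		// OpenSSL expects as the symlink name (plus a ".0" suffix).
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
		if err != nil {
			fmt.Println("openssl:", err)
			return
		}
		hash := strings.TrimSpace(string(out))
		link := "/etc/ssl/certs/" + hash + ".0"
	
		// Same effect as the `ln -fs <cert> <link>` in the log.
		os.Remove(link)
		if err := os.Symlink(cert, link); err != nil {
			fmt.Println("symlink:", err)
			return
		}
		fmt.Println("linked", link, "->", cert)
	}
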
	I1003 19:24:10.605108  448124 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 19:24:10.608802  448124 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1003 19:24:10.608868  448124 kubeadm.go:400] StartCluster: {Name:force-systemd-flag-855981 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-855981 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 19:24:10.608946  448124 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1003 19:24:10.609020  448124 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1003 19:24:10.636684  448124 cri.go:89] found id: ""
	I1003 19:24:10.636792  448124 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 19:24:10.645038  448124 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1003 19:24:10.653310  448124 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1003 19:24:10.653436  448124 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 19:24:10.662860  448124 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1003 19:24:10.662881  448124 kubeadm.go:157] found existing configuration files:
	
	I1003 19:24:10.662932  448124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1003 19:24:10.670911  448124 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1003 19:24:10.670975  448124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1003 19:24:10.678388  448124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1003 19:24:10.686730  448124 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1003 19:24:10.686831  448124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1003 19:24:10.694638  448124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1003 19:24:10.702591  448124 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1003 19:24:10.702695  448124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1003 19:24:10.709817  448124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1003 19:24:10.717667  448124 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1003 19:24:10.717776  448124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
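
The grep/rm pairs above are the stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, and since none exist on this fresh node, all four removals are no-ops. A compact Go rendering of that check, offered as a sketch rather than the actual kubeadm.go code:

	package main
	
	import (
		"fmt"
		"os"
		"strings"
	)
	
	func main() {
		const endpoint = "https://control-plane.minikube.internal:8443"
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err != nil {
				continue // missing file: nothing to clean up (the case in this log)
			}
			if !strings.Contains(string(data), endpoint) {
				fmt.Println("removing stale config", f)
				os.Remove(f)
			}
		}
	}
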
	I1003 19:24:10.725193  448124 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1003 19:24:10.772768  448124 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1003 19:24:10.773016  448124 kubeadm.go:318] [preflight] Running pre-flight checks
	I1003 19:24:10.811431  448124 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1003 19:24:10.811595  448124 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1003 19:24:10.811663  448124 kubeadm.go:318] OS: Linux
	I1003 19:24:10.811743  448124 kubeadm.go:318] CGROUPS_CPU: enabled
	I1003 19:24:10.811823  448124 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1003 19:24:10.811903  448124 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1003 19:24:10.812029  448124 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1003 19:24:10.812107  448124 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1003 19:24:10.812192  448124 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1003 19:24:10.812265  448124 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1003 19:24:10.812344  448124 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1003 19:24:10.812419  448124 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1003 19:24:10.887397  448124 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1003 19:24:10.887575  448124 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1003 19:24:10.887703  448124 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1003 19:24:10.894957  448124 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1003 19:24:10.900836  448124 out.go:252]   - Generating certificates and keys ...
	I1003 19:24:10.901005  448124 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1003 19:24:10.901146  448124 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1003 19:24:11.712087  448124 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1003 19:24:12.192829  448124 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1003 19:24:12.531934  448124 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1003 19:24:13.321743  448124 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1003 19:24:15.549937  448124 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1003 19:24:15.550308  448124 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-855981 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1003 19:24:16.209021  448124 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1003 19:24:16.209425  448124 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-855981 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1003 19:24:16.450669  448124 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1003 19:24:16.888564  448124 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1003 19:24:17.473577  448124 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1003 19:24:17.473857  448124 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1003 19:24:17.941209  448124 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1003 19:24:18.136046  448124 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1003 19:24:18.390966  448124 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1003 19:24:18.741234  448124 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1003 19:24:18.964526  448124 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1003 19:24:18.965124  448124 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1003 19:24:18.967783  448124 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1003 19:24:18.972443  448124 out.go:252]   - Booting up control plane ...
	I1003 19:24:18.972545  448124 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1003 19:24:18.972626  448124 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1003 19:24:18.972695  448124 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1003 19:24:18.987398  448124 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1003 19:24:18.987803  448124 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1003 19:24:18.994764  448124 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1003 19:24:18.995078  448124 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1003 19:24:18.995127  448124 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1003 19:24:19.116227  448124 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1003 19:24:19.116358  448124 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1003 19:24:21.120205  448124 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 2.001131159s
	I1003 19:24:21.120317  448124 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1003 19:24:21.120403  448124 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1003 19:24:21.120495  448124 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1003 19:24:21.120577  448124 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1003 19:28:21.119976  448124 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.00026643s
	I1003 19:28:21.120302  448124 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000266513s
	I1003 19:28:21.120392  448124 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000702943s
	I1003 19:28:21.120405  448124 kubeadm.go:318] 
	I1003 19:28:21.120497  448124 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1003 19:28:21.120583  448124 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1003 19:28:21.120682  448124 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1003 19:28:21.120799  448124 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1003 19:28:21.120879  448124 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1003 19:28:21.120961  448124 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1003 19:28:21.120970  448124 kubeadm.go:318] 
	I1003 19:28:21.124662  448124 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1003 19:28:21.124924  448124 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1003 19:28:21.125044  448124 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1003 19:28:21.125618  448124 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1003 19:28:21.125698  448124 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	W1003 19:28:21.125825  448124 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-855981 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-855981 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 2.001131159s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.00026643s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000266513s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000702943s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-855981 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-855981 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 2.001131159s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.00026643s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000266513s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000702943s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1003 19:28:21.125906  448124 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1003 19:28:21.672371  448124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 19:28:21.685248  448124 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1003 19:28:21.685314  448124 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 19:28:21.693443  448124 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1003 19:28:21.693463  448124 kubeadm.go:157] found existing configuration files:
	
	I1003 19:28:21.693514  448124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1003 19:28:21.701653  448124 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1003 19:28:21.701719  448124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1003 19:28:21.709339  448124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1003 19:28:21.717339  448124 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1003 19:28:21.717405  448124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1003 19:28:21.725047  448124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1003 19:28:21.732831  448124 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1003 19:28:21.732904  448124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1003 19:28:21.740645  448124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1003 19:28:21.748531  448124 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1003 19:28:21.748594  448124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1003 19:28:21.755896  448124 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1003 19:28:21.796277  448124 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1003 19:28:21.796579  448124 kubeadm.go:318] [preflight] Running pre-flight checks
	I1003 19:28:21.821115  448124 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1003 19:28:21.821188  448124 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1003 19:28:21.821225  448124 kubeadm.go:318] OS: Linux
	I1003 19:28:21.821274  448124 kubeadm.go:318] CGROUPS_CPU: enabled
	I1003 19:28:21.821325  448124 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1003 19:28:21.821375  448124 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1003 19:28:21.821425  448124 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1003 19:28:21.821476  448124 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1003 19:28:21.821531  448124 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1003 19:28:21.821579  448124 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1003 19:28:21.821629  448124 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1003 19:28:21.821678  448124 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1003 19:28:21.893400  448124 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1003 19:28:21.893547  448124 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1003 19:28:21.893694  448124 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1003 19:28:21.905164  448124 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1003 19:28:21.911977  448124 out.go:252]   - Generating certificates and keys ...
	I1003 19:28:21.912080  448124 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1003 19:28:21.912158  448124 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1003 19:28:21.912263  448124 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1003 19:28:21.912329  448124 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1003 19:28:21.912405  448124 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1003 19:28:21.912468  448124 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1003 19:28:21.912541  448124 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1003 19:28:21.912610  448124 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1003 19:28:21.912694  448124 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1003 19:28:21.912786  448124 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1003 19:28:21.912832  448124 kubeadm.go:318] [certs] Using the existing "sa" key
	I1003 19:28:21.912900  448124 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1003 19:28:22.346766  448124 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1003 19:28:22.738855  448124 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1003 19:28:23.202029  448124 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1003 19:28:24.078345  448124 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1003 19:28:24.225561  448124 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1003 19:28:24.226320  448124 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1003 19:28:24.228968  448124 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1003 19:28:24.233210  448124 out.go:252]   - Booting up control plane ...
	I1003 19:28:24.233328  448124 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1003 19:28:24.233422  448124 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1003 19:28:24.235948  448124 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1003 19:28:24.251647  448124 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1003 19:28:24.251950  448124 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1003 19:28:24.260445  448124 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1003 19:28:24.260799  448124 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1003 19:28:24.260991  448124 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1003 19:28:24.405021  448124 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1003 19:28:24.405150  448124 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1003 19:28:27.405369  448124 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 3.001387961s
	I1003 19:28:27.413151  448124 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1003 19:28:27.413255  448124 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1003 19:28:27.413350  448124 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1003 19:28:27.413446  448124 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1003 19:32:27.413964  448124 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000708724s
	I1003 19:32:27.414101  448124 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000529448s
	I1003 19:32:27.414633  448124 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000761263s
	I1003 19:32:27.414655  448124 kubeadm.go:318] 
	I1003 19:32:27.414751  448124 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1003 19:32:27.414838  448124 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1003 19:32:27.414935  448124 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1003 19:32:27.415036  448124 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1003 19:32:27.415124  448124 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1003 19:32:27.415214  448124 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1003 19:32:27.415226  448124 kubeadm.go:318] 
	I1003 19:32:27.419315  448124 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1003 19:32:27.419558  448124 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1003 19:32:27.419673  448124 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1003 19:32:27.420237  448124 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1003 19:32:27.420346  448124 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1003 19:32:27.420405  448124 kubeadm.go:402] duration metric: took 8m16.811540578s to StartCluster
	I1003 19:32:27.420447  448124 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 19:32:27.420511  448124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 19:32:27.445962  448124 cri.go:89] found id: ""
	I1003 19:32:27.446000  448124 logs.go:282] 0 containers: []
	W1003 19:32:27.446010  448124 logs.go:284] No container was found matching "kube-apiserver"
	I1003 19:32:27.446018  448124 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 19:32:27.446077  448124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 19:32:27.478379  448124 cri.go:89] found id: ""
	I1003 19:32:27.478406  448124 logs.go:282] 0 containers: []
	W1003 19:32:27.478415  448124 logs.go:284] No container was found matching "etcd"
	I1003 19:32:27.478421  448124 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 19:32:27.478482  448124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 19:32:27.509179  448124 cri.go:89] found id: ""
	I1003 19:32:27.509205  448124 logs.go:282] 0 containers: []
	W1003 19:32:27.509214  448124 logs.go:284] No container was found matching "coredns"
	I1003 19:32:27.509221  448124 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 19:32:27.509278  448124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 19:32:27.534898  448124 cri.go:89] found id: ""
	I1003 19:32:27.534923  448124 logs.go:282] 0 containers: []
	W1003 19:32:27.534933  448124 logs.go:284] No container was found matching "kube-scheduler"
	I1003 19:32:27.534940  448124 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 19:32:27.535031  448124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 19:32:27.561607  448124 cri.go:89] found id: ""
	I1003 19:32:27.561633  448124 logs.go:282] 0 containers: []
	W1003 19:32:27.561642  448124 logs.go:284] No container was found matching "kube-proxy"
	I1003 19:32:27.561649  448124 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 19:32:27.561712  448124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 19:32:27.587705  448124 cri.go:89] found id: ""
	I1003 19:32:27.587735  448124 logs.go:282] 0 containers: []
	W1003 19:32:27.587744  448124 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 19:32:27.587752  448124 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 19:32:27.587811  448124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 19:32:27.614585  448124 cri.go:89] found id: ""
	I1003 19:32:27.614613  448124 logs.go:282] 0 containers: []
	W1003 19:32:27.614622  448124 logs.go:284] No container was found matching "kindnet"
	I1003 19:32:27.614632  448124 logs.go:123] Gathering logs for kubelet ...
	I1003 19:32:27.614643  448124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 19:32:27.703594  448124 logs.go:123] Gathering logs for dmesg ...
	I1003 19:32:27.703630  448124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 19:32:27.719856  448124 logs.go:123] Gathering logs for describe nodes ...
	I1003 19:32:27.719887  448124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 19:32:27.790531  448124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 19:32:27.782685    2343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 19:32:27.783378    2343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 19:32:27.784615    2343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 19:32:27.785128    2343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 19:32:27.786586    2343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 19:32:27.782685    2343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 19:32:27.783378    2343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 19:32:27.784615    2343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 19:32:27.785128    2343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 19:32:27.786586    2343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 19:32:27.790554  448124 logs.go:123] Gathering logs for CRI-O ...
	I1003 19:32:27.790568  448124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 19:32:27.869907  448124 logs.go:123] Gathering logs for container status ...
	I1003 19:32:27.869944  448124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1003 19:32:27.899735  448124 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 3.001387961s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000708724s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000529448s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000761263s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1003 19:32:27.899791  448124 out.go:285] * 
	* 
	W1003 19:32:27.899845  448124 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 3.001387961s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000708724s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000529448s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000761263s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 3.001387961s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000708724s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000529448s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000761263s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1003 19:32:27.899865  448124 out.go:285] * 
	* 
	W1003 19:32:27.902016  448124 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 19:32:27.909498  448124 out.go:203] 
	W1003 19:32:27.913285  448124 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 3.001387961s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000708724s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000529448s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000761263s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 3.001387961s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000708724s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000529448s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000761263s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1003 19:32:27.913318  448124 out.go:285] * 
	* 
	I1003 19:32:27.916409  448124 out.go:203] 

                                                
                                                
** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-linux-arm64 start -p force-systemd-flag-855981 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio" : exit status 80
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-855981 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2025-10-03 19:32:28.316224351 +0000 UTC m=+3949.581302181
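The kubeadm output above already names the next diagnostic step: on a CRI-O node the crashed control-plane containers have to be inspected with crictl, not docker. A minimal triage sketch, assuming the node is still up and CRI-O listens on its default socket (the container ID below is a placeholder):

	# list the kube-* containers CRI-O started (socket path taken from the kubeadm hint above)
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	# read the logs of whichever component is crash-looping; <CONTAINERID> is a placeholder
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs <CONTAINERID>
	# the health endpoints kubeadm was polling can also be probed directly from the node
	curl -k https://127.0.0.1:10259/livez    # kube-scheduler
	curl -k https://127.0.0.1:10257/healthz  # kube-controller-manager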
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestForceSystemdFlag]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestForceSystemdFlag]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect force-systemd-flag-855981
helpers_test.go:243: (dbg) docker inspect force-systemd-flag-855981:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "baf20cc66e0976c0854e4261d4a6a823e5bfbe8e0fa2e21cffaabd8cf6475e8d",
	        "Created": "2025-10-03T19:24:00.754967041Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 448541,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-03T19:24:00.826333132Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/baf20cc66e0976c0854e4261d4a6a823e5bfbe8e0fa2e21cffaabd8cf6475e8d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/baf20cc66e0976c0854e4261d4a6a823e5bfbe8e0fa2e21cffaabd8cf6475e8d/hostname",
	        "HostsPath": "/var/lib/docker/containers/baf20cc66e0976c0854e4261d4a6a823e5bfbe8e0fa2e21cffaabd8cf6475e8d/hosts",
	        "LogPath": "/var/lib/docker/containers/baf20cc66e0976c0854e4261d4a6a823e5bfbe8e0fa2e21cffaabd8cf6475e8d/baf20cc66e0976c0854e4261d4a6a823e5bfbe8e0fa2e21cffaabd8cf6475e8d-json.log",
	        "Name": "/force-systemd-flag-855981",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "force-systemd-flag-855981:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "force-systemd-flag-855981",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "baf20cc66e0976c0854e4261d4a6a823e5bfbe8e0fa2e21cffaabd8cf6475e8d",
	                "LowerDir": "/var/lib/docker/overlay2/459cb549db36f650c3b2d3d928d42d552a1714b5e18aab05a79cfac627ebc763-init/diff:/var/lib/docker/overlay2/87b205803817b0b71a214d995ab7e10a92033bbf72d76d6e052f1d21ccecb313/diff",
	                "MergedDir": "/var/lib/docker/overlay2/459cb549db36f650c3b2d3d928d42d552a1714b5e18aab05a79cfac627ebc763/merged",
	                "UpperDir": "/var/lib/docker/overlay2/459cb549db36f650c3b2d3d928d42d552a1714b5e18aab05a79cfac627ebc763/diff",
	                "WorkDir": "/var/lib/docker/overlay2/459cb549db36f650c3b2d3d928d42d552a1714b5e18aab05a79cfac627ebc763/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "force-systemd-flag-855981",
	                "Source": "/var/lib/docker/volumes/force-systemd-flag-855981/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "force-systemd-flag-855981",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "force-systemd-flag-855981",
	                "name.minikube.sigs.k8s.io": "force-systemd-flag-855981",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7f13e5d14b4f012dbfa980ac5150ad4231e3b3ab8d05dddbeea65764922b3455",
	            "SandboxKey": "/var/run/docker/netns/7f13e5d14b4f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33398"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33399"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33402"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33400"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33401"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "force-systemd-flag-855981": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ba:e5:a8:e1:8b:c3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9bb6abe107e23ad5ec2dd2dcf11b4946d83d4dd676362f6f555d88b278ee9491",
	                    "EndpointID": "724f7e4531b88a890df06ba135f3e47e0246959794aa0de8651adada226f636c",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "force-systemd-flag-855981",
	                        "baf20cc66e09"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
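The docker inspect dump above is verbose; the fields the post-mortem actually cares about (container state, the static IP on the per-profile network, and the host port mapped to the API server) can be pulled out with the same --format templates minikube itself uses later in this log. A sketch, assuming the force-systemd-flag-855981 container still exists:

	# container state and the IP assigned on the profile network
	docker inspect force-systemd-flag-855981 \
	  --format '{{.State.Status}} {{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}'
	# host port published for 8443/tcp (the API server port inside the container)
	docker inspect force-systemd-flag-855981 \
	  --format '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'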
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-855981 -n force-systemd-flag-855981
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-flag-855981 -n force-systemd-flag-855981: exit status 6 (310.014917ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1003 19:32:28.630347  458389 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-855981" does not appear in /home/jenkins/minikube-integration/21625-284583/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
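Exit status 6 here follows from the stderr line above: the profile never reached a point where its endpoint was written to the kubeconfig, so kubectl is still pointing at a stale context. The warning in the stdout block suggests the usual fix, which is only meaningful once the cluster actually starts; a sketch using this run's profile name:

	# repoint kubectl at the profile's API server endpoint (no effect until the cluster exists)
	minikube update-context -p force-systemd-flag-855981
	kubectl config current-context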
helpers_test.go:252: <<< TestForceSystemdFlag FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestForceSystemdFlag]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-855981 logs -n 25
helpers_test.go:260: TestForceSystemdFlag logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                    ARGS                                                    │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-388132 sudo systemctl cat kubelet --no-pager                                                     │ cilium-388132             │ jenkins │ v1.37.0 │ 03 Oct 25 19:25 UTC │                     │
	│ ssh     │ -p cilium-388132 sudo journalctl -xeu kubelet --all --full --no-pager                                      │ cilium-388132             │ jenkins │ v1.37.0 │ 03 Oct 25 19:25 UTC │                     │
	│ ssh     │ -p cilium-388132 sudo cat /etc/kubernetes/kubelet.conf                                                     │ cilium-388132             │ jenkins │ v1.37.0 │ 03 Oct 25 19:25 UTC │                     │
	│ ssh     │ -p cilium-388132 sudo cat /var/lib/kubelet/config.yaml                                                     │ cilium-388132             │ jenkins │ v1.37.0 │ 03 Oct 25 19:25 UTC │                     │
	│ ssh     │ -p cilium-388132 sudo systemctl status docker --all --full --no-pager                                      │ cilium-388132             │ jenkins │ v1.37.0 │ 03 Oct 25 19:25 UTC │                     │
	│ ssh     │ -p cilium-388132 sudo systemctl cat docker --no-pager                                                      │ cilium-388132             │ jenkins │ v1.37.0 │ 03 Oct 25 19:25 UTC │                     │
	│ ssh     │ -p cilium-388132 sudo cat /etc/docker/daemon.json                                                          │ cilium-388132             │ jenkins │ v1.37.0 │ 03 Oct 25 19:25 UTC │                     │
	│ ssh     │ -p cilium-388132 sudo docker system info                                                                   │ cilium-388132             │ jenkins │ v1.37.0 │ 03 Oct 25 19:25 UTC │                     │
	│ ssh     │ -p cilium-388132 sudo systemctl status cri-docker --all --full --no-pager                                  │ cilium-388132             │ jenkins │ v1.37.0 │ 03 Oct 25 19:25 UTC │                     │
	│ ssh     │ -p cilium-388132 sudo systemctl cat cri-docker --no-pager                                                  │ cilium-388132             │ jenkins │ v1.37.0 │ 03 Oct 25 19:25 UTC │                     │
	│ ssh     │ -p cilium-388132 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                             │ cilium-388132             │ jenkins │ v1.37.0 │ 03 Oct 25 19:25 UTC │                     │
	│ ssh     │ -p cilium-388132 sudo cat /usr/lib/systemd/system/cri-docker.service                                       │ cilium-388132             │ jenkins │ v1.37.0 │ 03 Oct 25 19:25 UTC │                     │
	│ ssh     │ -p cilium-388132 sudo cri-dockerd --version                                                                │ cilium-388132             │ jenkins │ v1.37.0 │ 03 Oct 25 19:25 UTC │                     │
	│ ssh     │ -p cilium-388132 sudo systemctl status containerd --all --full --no-pager                                  │ cilium-388132             │ jenkins │ v1.37.0 │ 03 Oct 25 19:25 UTC │                     │
	│ ssh     │ -p cilium-388132 sudo systemctl cat containerd --no-pager                                                  │ cilium-388132             │ jenkins │ v1.37.0 │ 03 Oct 25 19:25 UTC │                     │
	│ ssh     │ -p cilium-388132 sudo cat /lib/systemd/system/containerd.service                                           │ cilium-388132             │ jenkins │ v1.37.0 │ 03 Oct 25 19:25 UTC │                     │
	│ ssh     │ -p cilium-388132 sudo cat /etc/containerd/config.toml                                                      │ cilium-388132             │ jenkins │ v1.37.0 │ 03 Oct 25 19:25 UTC │                     │
	│ ssh     │ -p cilium-388132 sudo containerd config dump                                                               │ cilium-388132             │ jenkins │ v1.37.0 │ 03 Oct 25 19:25 UTC │                     │
	│ ssh     │ -p cilium-388132 sudo systemctl status crio --all --full --no-pager                                        │ cilium-388132             │ jenkins │ v1.37.0 │ 03 Oct 25 19:25 UTC │                     │
	│ ssh     │ -p cilium-388132 sudo systemctl cat crio --no-pager                                                        │ cilium-388132             │ jenkins │ v1.37.0 │ 03 Oct 25 19:25 UTC │                     │
	│ ssh     │ -p cilium-388132 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                              │ cilium-388132             │ jenkins │ v1.37.0 │ 03 Oct 25 19:25 UTC │                     │
	│ ssh     │ -p cilium-388132 sudo crio config                                                                          │ cilium-388132             │ jenkins │ v1.37.0 │ 03 Oct 25 19:25 UTC │                     │
	│ delete  │ -p cilium-388132                                                                                           │ cilium-388132             │ jenkins │ v1.37.0 │ 03 Oct 25 19:25 UTC │ 03 Oct 25 19:25 UTC │
	│ start   │ -p force-systemd-env-159095 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-env-159095  │ jenkins │ v1.37.0 │ 03 Oct 25 19:25 UTC │                     │
	│ ssh     │ force-systemd-flag-855981 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                       │ force-systemd-flag-855981 │ jenkins │ v1.37.0 │ 03 Oct 25 19:32 UTC │ 03 Oct 25 19:32 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/03 19:25:48
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 19:25:48.814935  454580 out.go:360] Setting OutFile to fd 1 ...
	I1003 19:25:48.815053  454580 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 19:25:48.815063  454580 out.go:374] Setting ErrFile to fd 2...
	I1003 19:25:48.815069  454580 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 19:25:48.815316  454580 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-284583/.minikube/bin
	I1003 19:25:48.815712  454580 out.go:368] Setting JSON to false
	I1003 19:25:48.816576  454580 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7700,"bootTime":1759511849,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1003 19:25:48.816644  454580 start.go:140] virtualization:  
	I1003 19:25:48.821913  454580 out.go:179] * [force-systemd-env-159095] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1003 19:25:48.824939  454580 out.go:179]   - MINIKUBE_LOCATION=21625
	I1003 19:25:48.825013  454580 notify.go:220] Checking for updates...
	I1003 19:25:48.830835  454580 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 19:25:48.833636  454580 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21625-284583/kubeconfig
	I1003 19:25:48.836453  454580 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-284583/.minikube
	I1003 19:25:48.839182  454580 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1003 19:25:48.842048  454580 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=true
	I1003 19:25:48.845415  454580 config.go:182] Loaded profile config "force-systemd-flag-855981": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 19:25:48.845523  454580 driver.go:421] Setting default libvirt URI to qemu:///system
	I1003 19:25:48.879322  454580 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1003 19:25:48.879461  454580 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 19:25:48.946789  454580 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-03 19:25:48.937306176 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1003 19:25:48.946902  454580 docker.go:318] overlay module found
	I1003 19:25:48.950002  454580 out.go:179] * Using the docker driver based on user configuration
	I1003 19:25:48.952919  454580 start.go:304] selected driver: docker
	I1003 19:25:48.952941  454580 start.go:924] validating driver "docker" against <nil>
	I1003 19:25:48.952956  454580 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 19:25:48.953668  454580 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 19:25:49.013568  454580 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-03 19:25:49.00374877 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1003 19:25:49.013736  454580 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1003 19:25:49.013961  454580 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1003 19:25:49.016864  454580 out.go:179] * Using Docker driver with root privileges
	I1003 19:25:49.019673  454580 cni.go:84] Creating CNI manager for ""
	I1003 19:25:49.019747  454580 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 19:25:49.019760  454580 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1003 19:25:49.019838  454580 start.go:348] cluster config:
	{Name:force-systemd-env-159095 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-159095 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 19:25:49.022979  454580 out.go:179] * Starting "force-systemd-env-159095" primary control-plane node in "force-systemd-env-159095" cluster
	I1003 19:25:49.025847  454580 cache.go:123] Beginning downloading kic base image for docker with crio
	I1003 19:25:49.028784  454580 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1003 19:25:49.031644  454580 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 19:25:49.031704  454580 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21625-284583/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1003 19:25:49.031718  454580 cache.go:58] Caching tarball of preloaded images
	I1003 19:25:49.031730  454580 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1003 19:25:49.031803  454580 preload.go:233] Found /home/jenkins/minikube-integration/21625-284583/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1003 19:25:49.031813  454580 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1003 19:25:49.031925  454580 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-env-159095/config.json ...
	I1003 19:25:49.031942  454580 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-env-159095/config.json: {Name:mkccfe0252f86bc3641a86c319cb32a0e2dd05e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:25:49.050276  454580 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1003 19:25:49.050303  454580 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1003 19:25:49.050326  454580 cache.go:232] Successfully downloaded all kic artifacts
	I1003 19:25:49.050350  454580 start.go:360] acquireMachinesLock for force-systemd-env-159095: {Name:mk3d73d31c60e2c8140d6014661a31ecf05d19cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 19:25:49.050460  454580 start.go:364] duration metric: took 89.626µs to acquireMachinesLock for "force-systemd-env-159095"
	I1003 19:25:49.050492  454580 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-159095 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-159095 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1003 19:25:49.050559  454580 start.go:125] createHost starting for "" (driver="docker")
	I1003 19:25:49.054048  454580 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1003 19:25:49.054261  454580 start.go:159] libmachine.API.Create for "force-systemd-env-159095" (driver="docker")
	I1003 19:25:49.054309  454580 client.go:168] LocalClient.Create starting
	I1003 19:25:49.054379  454580 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem
	I1003 19:25:49.054420  454580 main.go:141] libmachine: Decoding PEM data...
	I1003 19:25:49.054446  454580 main.go:141] libmachine: Parsing certificate...
	I1003 19:25:49.054506  454580 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem
	I1003 19:25:49.054528  454580 main.go:141] libmachine: Decoding PEM data...
	I1003 19:25:49.054542  454580 main.go:141] libmachine: Parsing certificate...
	I1003 19:25:49.054914  454580 cli_runner.go:164] Run: docker network inspect force-systemd-env-159095 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1003 19:25:49.070363  454580 cli_runner.go:211] docker network inspect force-systemd-env-159095 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1003 19:25:49.070449  454580 network_create.go:284] running [docker network inspect force-systemd-env-159095] to gather additional debugging logs...
	I1003 19:25:49.070465  454580 cli_runner.go:164] Run: docker network inspect force-systemd-env-159095
	W1003 19:25:49.085002  454580 cli_runner.go:211] docker network inspect force-systemd-env-159095 returned with exit code 1
	I1003 19:25:49.085035  454580 network_create.go:287] error running [docker network inspect force-systemd-env-159095]: docker network inspect force-systemd-env-159095: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-159095 not found
	I1003 19:25:49.085049  454580 network_create.go:289] output of [docker network inspect force-systemd-env-159095]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-159095 not found
	
	** /stderr **
	I1003 19:25:49.085141  454580 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 19:25:49.100506  454580 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-3a8a28910ba8 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6e:7a:d0:f8:54:63} reservation:<nil>}
	I1003 19:25:49.100966  454580 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-157403cbb468 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:8a:ee:cb:12:bf:d0} reservation:<nil>}
	I1003 19:25:49.101206  454580 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8d1e24f7a986 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:9e:1b:b1:d8:1a:13} reservation:<nil>}
	I1003 19:25:49.101480  454580 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-9bb6abe107e2 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:06:42:f2:08:ad:b8} reservation:<nil>}
	I1003 19:25:49.101924  454580 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a3a450}
	I1003 19:25:49.101949  454580 network_create.go:124] attempt to create docker network force-systemd-env-159095 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1003 19:25:49.102006  454580 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-159095 force-systemd-env-159095
	I1003 19:25:49.169743  454580 network_create.go:108] docker network force-systemd-env-159095 192.168.85.0/24 created
	I1003 19:25:49.169776  454580 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-env-159095" container
	I1003 19:25:49.169855  454580 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1003 19:25:49.187346  454580 cli_runner.go:164] Run: docker volume create force-systemd-env-159095 --label name.minikube.sigs.k8s.io=force-systemd-env-159095 --label created_by.minikube.sigs.k8s.io=true
	I1003 19:25:49.205736  454580 oci.go:103] Successfully created a docker volume force-systemd-env-159095
	I1003 19:25:49.205842  454580 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-159095-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-159095 --entrypoint /usr/bin/test -v force-systemd-env-159095:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1003 19:25:49.705331  454580 oci.go:107] Successfully prepared a docker volume force-systemd-env-159095
	I1003 19:25:49.705367  454580 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 19:25:49.705387  454580 kic.go:194] Starting extracting preloaded images to volume ...
	I1003 19:25:49.705453  454580 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21625-284583/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-159095:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1003 19:25:54.171223  454580 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21625-284583/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-159095:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.465727636s)
	I1003 19:25:54.171255  454580 kic.go:203] duration metric: took 4.465864647s to extract preloaded images to volume ...
	W1003 19:25:54.171388  454580 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1003 19:25:54.171514  454580 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1003 19:25:54.221950  454580 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-159095 --name force-systemd-env-159095 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-159095 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-159095 --network force-systemd-env-159095 --ip 192.168.85.2 --volume force-systemd-env-159095:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1003 19:25:54.512127  454580 cli_runner.go:164] Run: docker container inspect force-systemd-env-159095 --format={{.State.Running}}
	I1003 19:25:54.536339  454580 cli_runner.go:164] Run: docker container inspect force-systemd-env-159095 --format={{.State.Status}}
	I1003 19:25:54.559306  454580 cli_runner.go:164] Run: docker exec force-systemd-env-159095 stat /var/lib/dpkg/alternatives/iptables
	I1003 19:25:54.607174  454580 oci.go:144] the created container "force-systemd-env-159095" has a running status.
	I1003 19:25:54.607210  454580 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21625-284583/.minikube/machines/force-systemd-env-159095/id_rsa...
	I1003 19:25:54.966956  454580 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-284583/.minikube/machines/force-systemd-env-159095/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1003 19:25:54.967062  454580 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21625-284583/.minikube/machines/force-systemd-env-159095/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1003 19:25:54.992594  454580 cli_runner.go:164] Run: docker container inspect force-systemd-env-159095 --format={{.State.Status}}
	I1003 19:25:55.022469  454580 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1003 19:25:55.022492  454580 kic_runner.go:114] Args: [docker exec --privileged force-systemd-env-159095 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1003 19:25:55.071006  454580 cli_runner.go:164] Run: docker container inspect force-systemd-env-159095 --format={{.State.Status}}
	I1003 19:25:55.090433  454580 machine.go:93] provisionDockerMachine start ...
	I1003 19:25:55.090531  454580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-159095
	I1003 19:25:55.109612  454580 main.go:141] libmachine: Using SSH client type: native
	I1003 19:25:55.109965  454580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33403 <nil> <nil>}
	I1003 19:25:55.109976  454580 main.go:141] libmachine: About to run SSH command:
	hostname
	I1003 19:25:55.110678  454580 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1003 19:25:58.244351  454580 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-env-159095
	
	I1003 19:25:58.244380  454580 ubuntu.go:182] provisioning hostname "force-systemd-env-159095"
	I1003 19:25:58.244459  454580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-159095
	I1003 19:25:58.261988  454580 main.go:141] libmachine: Using SSH client type: native
	I1003 19:25:58.262309  454580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33403 <nil> <nil>}
	I1003 19:25:58.262328  454580 main.go:141] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-159095 && echo "force-systemd-env-159095" | sudo tee /etc/hostname
	I1003 19:25:58.401494  454580 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-env-159095
	
	I1003 19:25:58.401592  454580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-159095
	I1003 19:25:58.423025  454580 main.go:141] libmachine: Using SSH client type: native
	I1003 19:25:58.423333  454580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33403 <nil> <nil>}
	I1003 19:25:58.423357  454580 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-159095' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-159095/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-159095' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 19:25:58.552846  454580 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 19:25:58.552878  454580 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21625-284583/.minikube CaCertPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21625-284583/.minikube}
	I1003 19:25:58.552899  454580 ubuntu.go:190] setting up certificates
	I1003 19:25:58.552909  454580 provision.go:84] configureAuth start
	I1003 19:25:58.552972  454580 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-159095
	I1003 19:25:58.569674  454580 provision.go:143] copyHostCerts
	I1003 19:25:58.569717  454580 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21625-284583/.minikube/ca.pem
	I1003 19:25:58.569755  454580 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-284583/.minikube/ca.pem, removing ...
	I1003 19:25:58.569768  454580 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-284583/.minikube/ca.pem
	I1003 19:25:58.569845  454580 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21625-284583/.minikube/ca.pem (1082 bytes)
	I1003 19:25:58.569938  454580 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21625-284583/.minikube/cert.pem
	I1003 19:25:58.569966  454580 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-284583/.minikube/cert.pem, removing ...
	I1003 19:25:58.569974  454580 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-284583/.minikube/cert.pem
	I1003 19:25:58.570006  454580 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21625-284583/.minikube/cert.pem (1123 bytes)
	I1003 19:25:58.570058  454580 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21625-284583/.minikube/key.pem
	I1003 19:25:58.570083  454580 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-284583/.minikube/key.pem, removing ...
	I1003 19:25:58.570093  454580 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-284583/.minikube/key.pem
	I1003 19:25:58.570120  454580 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21625-284583/.minikube/key.pem (1675 bytes)
	I1003 19:25:58.570176  454580 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca-key.pem org=jenkins.force-systemd-env-159095 san=[127.0.0.1 192.168.85.2 force-systemd-env-159095 localhost minikube]
	I1003 19:25:58.725391  454580 provision.go:177] copyRemoteCerts
	I1003 19:25:58.725464  454580 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 19:25:58.725532  454580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-159095
	I1003 19:25:58.742899  454580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33403 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/force-systemd-env-159095/id_rsa Username:docker}
	I1003 19:25:58.836771  454580 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1003 19:25:58.836872  454580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1003 19:25:58.856421  454580 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1003 19:25:58.856490  454580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1003 19:25:58.873878  454580 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1003 19:25:58.873951  454580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 19:25:58.891695  454580 provision.go:87] duration metric: took 338.762309ms to configureAuth
	I1003 19:25:58.891727  454580 ubuntu.go:206] setting minikube options for container-runtime
	I1003 19:25:58.891909  454580 config.go:182] Loaded profile config "force-systemd-env-159095": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 19:25:58.892036  454580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-159095
	I1003 19:25:58.909849  454580 main.go:141] libmachine: Using SSH client type: native
	I1003 19:25:58.910175  454580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33403 <nil> <nil>}
	I1003 19:25:58.910196  454580 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1003 19:25:59.149250  454580 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1003 19:25:59.149275  454580 machine.go:96] duration metric: took 4.058820658s to provisionDockerMachine
	I1003 19:25:59.149286  454580 client.go:171] duration metric: took 10.094965799s to LocalClient.Create
	I1003 19:25:59.149305  454580 start.go:167] duration metric: took 10.095045906s to libmachine.API.Create "force-systemd-env-159095"
	I1003 19:25:59.149313  454580 start.go:293] postStartSetup for "force-systemd-env-159095" (driver="docker")
	I1003 19:25:59.149324  454580 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 19:25:59.149388  454580 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 19:25:59.149448  454580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-159095
	I1003 19:25:59.166472  454580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33403 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/force-systemd-env-159095/id_rsa Username:docker}
	I1003 19:25:59.260632  454580 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 19:25:59.263890  454580 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1003 19:25:59.263920  454580 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1003 19:25:59.263931  454580 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-284583/.minikube/addons for local assets ...
	I1003 19:25:59.263982  454580 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-284583/.minikube/files for local assets ...
	I1003 19:25:59.264083  454580 filesync.go:149] local asset: /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem -> 2864342.pem in /etc/ssl/certs
	I1003 19:25:59.264095  454580 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem -> /etc/ssl/certs/2864342.pem
	I1003 19:25:59.264197  454580 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 19:25:59.271311  454580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem --> /etc/ssl/certs/2864342.pem (1708 bytes)
	I1003 19:25:59.288099  454580 start.go:296] duration metric: took 138.770343ms for postStartSetup
	I1003 19:25:59.288518  454580 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-159095
	I1003 19:25:59.304616  454580 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-env-159095/config.json ...
	I1003 19:25:59.305154  454580 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 19:25:59.305208  454580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-159095
	I1003 19:25:59.321231  454580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33403 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/force-systemd-env-159095/id_rsa Username:docker}
	I1003 19:25:59.414050  454580 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1003 19:25:59.418889  454580 start.go:128] duration metric: took 10.36831561s to createHost
	I1003 19:25:59.418914  454580 start.go:83] releasing machines lock for "force-systemd-env-159095", held for 10.368440084s
	I1003 19:25:59.418992  454580 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-159095
	I1003 19:25:59.438852  454580 ssh_runner.go:195] Run: cat /version.json
	I1003 19:25:59.438921  454580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-159095
	I1003 19:25:59.439197  454580 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 19:25:59.439259  454580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-159095
	I1003 19:25:59.456646  454580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33403 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/force-systemd-env-159095/id_rsa Username:docker}
	I1003 19:25:59.471733  454580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33403 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/force-systemd-env-159095/id_rsa Username:docker}
	I1003 19:25:59.552282  454580 ssh_runner.go:195] Run: systemctl --version
	I1003 19:25:59.640951  454580 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1003 19:25:59.676663  454580 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1003 19:25:59.681571  454580 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 19:25:59.681666  454580 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 19:25:59.710119  454580 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1003 19:25:59.710186  454580 start.go:495] detecting cgroup driver to use...
	I1003 19:25:59.710221  454580 start.go:499] using "systemd" cgroup driver as enforced via flags
	I1003 19:25:59.710292  454580 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 19:25:59.728779  454580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 19:25:59.741907  454580 docker.go:218] disabling cri-docker service (if available) ...
	I1003 19:25:59.741976  454580 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1003 19:25:59.760303  454580 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1003 19:25:59.779405  454580 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1003 19:25:59.910096  454580 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1003 19:26:00.057287  454580 docker.go:234] disabling docker service ...
	I1003 19:26:00.057364  454580 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1003 19:26:00.101800  454580 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1003 19:26:00.120548  454580 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1003 19:26:00.359493  454580 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1003 19:26:00.513222  454580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1003 19:26:00.528783  454580 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 19:26:00.544824  454580 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1003 19:26:00.544920  454580 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:26:00.554512  454580 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1003 19:26:00.554583  454580 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:26:00.564211  454580 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:26:00.573228  454580 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:26:00.582012  454580 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 19:26:00.590364  454580 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:26:00.599344  454580 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:26:00.612579  454580 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:26:00.622182  454580 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 19:26:00.630261  454580 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 19:26:00.637849  454580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 19:26:00.759932  454580 ssh_runner.go:195] Run: sudo systemctl restart crio
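The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place before CRI-O is restarted. A minimal verification sketch, not part of the test run, that assumes the stock kicbase drop-in path and simply greps for the keys the test just set:

    # print the rewritten keys and compare them with the sed targets above
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
    # expected, given the commands above:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "systemd"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",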
	I1003 19:26:00.895938  454580 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1003 19:26:00.896060  454580 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1003 19:26:00.899748  454580 start.go:563] Will wait 60s for crictl version
	I1003 19:26:00.899851  454580 ssh_runner.go:195] Run: which crictl
	I1003 19:26:00.903366  454580 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1003 19:26:00.927598  454580 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1003 19:26:00.927690  454580 ssh_runner.go:195] Run: crio --version
	I1003 19:26:00.956507  454580 ssh_runner.go:195] Run: crio --version
	I1003 19:26:00.987017  454580 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1003 19:26:00.989778  454580 cli_runner.go:164] Run: docker network inspect force-systemd-env-159095 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 19:26:01.006137  454580 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1003 19:26:01.010572  454580 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 19:26:01.020373  454580 kubeadm.go:883] updating cluster {Name:force-systemd-env-159095 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-159095 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1003 19:26:01.020502  454580 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 19:26:01.020562  454580 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 19:26:01.052628  454580 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 19:26:01.052653  454580 crio.go:433] Images already preloaded, skipping extraction
	I1003 19:26:01.052708  454580 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 19:26:01.094457  454580 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 19:26:01.094480  454580 cache_images.go:85] Images are preloaded, skipping loading
	I1003 19:26:01.094488  454580 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1003 19:26:01.094571  454580 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-env-159095 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-159095 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
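The kubelet flags printed above end up in the systemd drop-in that is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below. A minimal sketch, using only standard systemd tooling (not part of this run), for inspecting what the node's kubelet actually loaded:

    # show the kubelet unit together with the 10-kubeadm.conf drop-in written by minikube
    sudo systemctl cat kubelet
    # confirm the service restarted with the --hostname-override/--node-ip flags from the drop-in
    sudo systemctl status kubelet --no-pager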
	I1003 19:26:01.094654  454580 ssh_runner.go:195] Run: crio config
	I1003 19:26:01.193577  454580 cni.go:84] Creating CNI manager for ""
	I1003 19:26:01.193603  454580 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 19:26:01.193624  454580 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1003 19:26:01.193648  454580 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-env-159095 NodeName:force-systemd-env-159095 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 19:26:01.193786  454580 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-env-159095"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
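The generated manifest above is written to /var/tmp/minikube/kubeadm.yaml.new and later fed to kubeadm init --config. A minimal sketch of a sanity check one could run on the node before init; it assumes kubeadm v1.34 still ships the config validate subcommand:

    # validate the rendered kubeadm manifest against the v1beta4 / kubelet / kube-proxy schemas
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new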
	
	I1003 19:26:01.193870  454580 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1003 19:26:01.203259  454580 binaries.go:44] Found k8s binaries, skipping transfer
	I1003 19:26:01.203361  454580 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1003 19:26:01.212693  454580 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1003 19:26:01.227330  454580 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 19:26:01.241438  454580 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
	I1003 19:26:01.255108  454580 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1003 19:26:01.259043  454580 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 19:26:01.269128  454580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 19:26:01.394659  454580 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 19:26:01.411684  454580 certs.go:69] Setting up /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-env-159095 for IP: 192.168.85.2
	I1003 19:26:01.411706  454580 certs.go:195] generating shared ca certs ...
	I1003 19:26:01.411723  454580 certs.go:227] acquiring lock for ca certs: {Name:mk5a10e6c921326e9c211447576eaeb893259ba7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:26:01.411941  454580 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21625-284583/.minikube/ca.key
	I1003 19:26:01.412027  454580 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.key
	I1003 19:26:01.412041  454580 certs.go:257] generating profile certs ...
	I1003 19:26:01.412117  454580 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-env-159095/client.key
	I1003 19:26:01.412166  454580 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-env-159095/client.crt with IP's: []
	I1003 19:26:01.766820  454580 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-env-159095/client.crt ...
	I1003 19:26:01.766856  454580 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-env-159095/client.crt: {Name:mk51666518138b5a2e219819702e236e76872a78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:26:01.767096  454580 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-env-159095/client.key ...
	I1003 19:26:01.767116  454580 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-env-159095/client.key: {Name:mkfcf481461d38e104e159039c71e04647b08ed2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:26:01.767219  454580 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-env-159095/apiserver.key.273a6662
	I1003 19:26:01.767240  454580 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-env-159095/apiserver.crt.273a6662 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1003 19:26:02.216829  454580 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-env-159095/apiserver.crt.273a6662 ...
	I1003 19:26:02.216863  454580 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-env-159095/apiserver.crt.273a6662: {Name:mk1f4f655e70df952f523e5ea19eff6145f62906 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:26:02.217058  454580 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-env-159095/apiserver.key.273a6662 ...
	I1003 19:26:02.217072  454580 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-env-159095/apiserver.key.273a6662: {Name:mk8d83f54e21a45bfbebd7f368a1696954444530 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:26:02.217158  454580 certs.go:382] copying /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-env-159095/apiserver.crt.273a6662 -> /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-env-159095/apiserver.crt
	I1003 19:26:02.217239  454580 certs.go:386] copying /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-env-159095/apiserver.key.273a6662 -> /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-env-159095/apiserver.key
	I1003 19:26:02.217303  454580 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-env-159095/proxy-client.key
	I1003 19:26:02.217321  454580 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-env-159095/proxy-client.crt with IP's: []
	I1003 19:26:03.286918  454580 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-env-159095/proxy-client.crt ...
	I1003 19:26:03.286950  454580 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-env-159095/proxy-client.crt: {Name:mk4936fc3ff95065d787c9e24dc27c1b043c8db0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:26:03.287147  454580 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-env-159095/proxy-client.key ...
	I1003 19:26:03.287161  454580 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-env-159095/proxy-client.key: {Name:mke42ad861862d33b4d28d4bb87c7f88e4ef1b0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:26:03.287252  454580 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-284583/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1003 19:26:03.287278  454580 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-284583/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1003 19:26:03.287290  454580 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1003 19:26:03.287301  454580 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1003 19:26:03.287312  454580 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-env-159095/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1003 19:26:03.287328  454580 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-env-159095/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1003 19:26:03.287344  454580 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-env-159095/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1003 19:26:03.287361  454580 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-env-159095/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1003 19:26:03.287415  454580 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/286434.pem (1338 bytes)
	W1003 19:26:03.287455  454580 certs.go:480] ignoring /home/jenkins/minikube-integration/21625-284583/.minikube/certs/286434_empty.pem, impossibly tiny 0 bytes
	I1003 19:26:03.287467  454580 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 19:26:03.287491  454580 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem (1082 bytes)
	I1003 19:26:03.287519  454580 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem (1123 bytes)
	I1003 19:26:03.287546  454580 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/key.pem (1675 bytes)
	I1003 19:26:03.287587  454580 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem (1708 bytes)
	I1003 19:26:03.287618  454580 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/286434.pem -> /usr/share/ca-certificates/286434.pem
	I1003 19:26:03.287634  454580 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem -> /usr/share/ca-certificates/2864342.pem
	I1003 19:26:03.287646  454580 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-284583/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1003 19:26:03.288259  454580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 19:26:03.307697  454580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1003 19:26:03.325577  454580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 19:26:03.343492  454580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1003 19:26:03.360867  454580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-env-159095/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1003 19:26:03.378948  454580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-env-159095/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1003 19:26:03.396233  454580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-env-159095/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 19:26:03.413528  454580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-env-159095/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1003 19:26:03.430564  454580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/certs/286434.pem --> /usr/share/ca-certificates/286434.pem (1338 bytes)
	I1003 19:26:03.447794  454580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem --> /usr/share/ca-certificates/2864342.pem (1708 bytes)
	I1003 19:26:03.464647  454580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 19:26:03.482186  454580 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1003 19:26:03.495617  454580 ssh_runner.go:195] Run: openssl version
	I1003 19:26:03.502334  454580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/286434.pem && ln -fs /usr/share/ca-certificates/286434.pem /etc/ssl/certs/286434.pem"
	I1003 19:26:03.510842  454580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/286434.pem
	I1003 19:26:03.514769  454580 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  3 18:34 /usr/share/ca-certificates/286434.pem
	I1003 19:26:03.514860  454580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/286434.pem
	I1003 19:26:03.556255  454580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/286434.pem /etc/ssl/certs/51391683.0"
	I1003 19:26:03.564677  454580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2864342.pem && ln -fs /usr/share/ca-certificates/2864342.pem /etc/ssl/certs/2864342.pem"
	I1003 19:26:03.573088  454580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2864342.pem
	I1003 19:26:03.576763  454580 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  3 18:34 /usr/share/ca-certificates/2864342.pem
	I1003 19:26:03.576835  454580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2864342.pem
	I1003 19:26:03.619212  454580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2864342.pem /etc/ssl/certs/3ec20f2e.0"
	I1003 19:26:03.627695  454580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 19:26:03.636292  454580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 19:26:03.640517  454580 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  3 18:27 /usr/share/ca-certificates/minikubeCA.pem
	I1003 19:26:03.640630  454580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 19:26:03.682789  454580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
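The three openssl x509 -hash calls above explain the otherwise cryptic link names 51391683.0, 3ec20f2e.0 and b5213941.0: each is the certificate's OpenSSL subject hash with a .0 suffix, which is how OpenSSL locates CAs in a hashed certificate directory. A minimal sketch of the same pattern for the minikube CA (the paths come from this run; the variable name is illustrative):

    # compute the subject hash and create the hash-named symlink OpenSSL expects
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 in this run
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"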
	I1003 19:26:03.691160  454580 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 19:26:03.694525  454580 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1003 19:26:03.694580  454580 kubeadm.go:400] StartCluster: {Name:force-systemd-env-159095 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-159095 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 19:26:03.694659  454580 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1003 19:26:03.694722  454580 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1003 19:26:03.721506  454580 cri.go:89] found id: ""
	I1003 19:26:03.721573  454580 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 19:26:03.729182  454580 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1003 19:26:03.736957  454580 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1003 19:26:03.737070  454580 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 19:26:03.745164  454580 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1003 19:26:03.745188  454580 kubeadm.go:157] found existing configuration files:
	
	I1003 19:26:03.745246  454580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1003 19:26:03.753423  454580 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1003 19:26:03.753494  454580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1003 19:26:03.761502  454580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1003 19:26:03.769271  454580 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1003 19:26:03.769367  454580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1003 19:26:03.776549  454580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1003 19:26:03.784475  454580 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1003 19:26:03.784542  454580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1003 19:26:03.792169  454580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1003 19:26:03.800093  454580 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1003 19:26:03.800214  454580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1003 19:26:03.807653  454580 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1003 19:26:03.846939  454580 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1003 19:26:03.847060  454580 kubeadm.go:318] [preflight] Running pre-flight checks
	I1003 19:26:03.875748  454580 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1003 19:26:03.875864  454580 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1003 19:26:03.875918  454580 kubeadm.go:318] OS: Linux
	I1003 19:26:03.875991  454580 kubeadm.go:318] CGROUPS_CPU: enabled
	I1003 19:26:03.876065  454580 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1003 19:26:03.876135  454580 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1003 19:26:03.876201  454580 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1003 19:26:03.876273  454580 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1003 19:26:03.876344  454580 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1003 19:26:03.876414  454580 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1003 19:26:03.876481  454580 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1003 19:26:03.876553  454580 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1003 19:26:03.943371  454580 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1003 19:26:03.943496  454580 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1003 19:26:03.943598  454580 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1003 19:26:03.954937  454580 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1003 19:26:03.958146  454580 out.go:252]   - Generating certificates and keys ...
	I1003 19:26:03.958243  454580 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1003 19:26:03.958316  454580 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1003 19:26:04.891896  454580 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1003 19:26:05.663439  454580 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1003 19:26:05.979405  454580 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1003 19:26:06.329112  454580 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1003 19:26:07.276225  454580 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1003 19:26:07.276609  454580 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [force-systemd-env-159095 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1003 19:26:07.432801  454580 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1003 19:26:07.433347  454580 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-159095 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1003 19:26:08.086133  454580 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1003 19:26:08.342028  454580 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1003 19:26:08.968895  454580 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1003 19:26:08.969186  454580 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1003 19:26:09.745221  454580 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1003 19:26:09.887350  454580 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1003 19:26:10.478301  454580 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1003 19:26:11.115450  454580 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1003 19:26:11.311431  454580 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1003 19:26:11.312258  454580 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1003 19:26:11.314985  454580 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1003 19:26:11.319587  454580 out.go:252]   - Booting up control plane ...
	I1003 19:26:11.319706  454580 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1003 19:26:11.319794  454580 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1003 19:26:11.319869  454580 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1003 19:26:11.336173  454580 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1003 19:26:11.336463  454580 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1003 19:26:11.343972  454580 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1003 19:26:11.344309  454580 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1003 19:26:11.345049  454580 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1003 19:26:11.474759  454580 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1003 19:26:11.474886  454580 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1003 19:26:12.976413  454580 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501726365s
	I1003 19:26:12.985214  454580 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1003 19:26:12.985316  454580 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1003 19:26:12.985619  454580 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1003 19:26:12.985704  454580 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1003 19:28:21.119976  448124 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.00026643s
	I1003 19:28:21.120302  448124 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000266513s
	I1003 19:28:21.120392  448124 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000702943s
	I1003 19:28:21.120405  448124 kubeadm.go:318] 
	I1003 19:28:21.120497  448124 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1003 19:28:21.120583  448124 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1003 19:28:21.120682  448124 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1003 19:28:21.120799  448124 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1003 19:28:21.120879  448124 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1003 19:28:21.120961  448124 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1003 19:28:21.120970  448124 kubeadm.go:318] 
	I1003 19:28:21.124662  448124 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1003 19:28:21.124924  448124 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1003 19:28:21.125044  448124 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1003 19:28:21.125618  448124 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1003 19:28:21.125698  448124 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	W1003 19:28:21.125825  448124 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-flag-855981 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-flag-855981 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 2.001131159s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.00026643s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000266513s
	[control-plane-check] kube-scheduler is not healthy after 4m0.000702943s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1003 19:28:21.125906  448124 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1003 19:28:21.672371  448124 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 19:28:21.685248  448124 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1003 19:28:21.685314  448124 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 19:28:21.693443  448124 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1003 19:28:21.693463  448124 kubeadm.go:157] found existing configuration files:
	
	I1003 19:28:21.693514  448124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1003 19:28:21.701653  448124 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1003 19:28:21.701719  448124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1003 19:28:21.709339  448124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1003 19:28:21.717339  448124 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1003 19:28:21.717405  448124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1003 19:28:21.725047  448124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1003 19:28:21.732831  448124 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1003 19:28:21.732904  448124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1003 19:28:21.740645  448124 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1003 19:28:21.748531  448124 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1003 19:28:21.748594  448124 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1003 19:28:21.755896  448124 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1003 19:28:21.796277  448124 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1003 19:28:21.796579  448124 kubeadm.go:318] [preflight] Running pre-flight checks
	I1003 19:28:21.821115  448124 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1003 19:28:21.821188  448124 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1003 19:28:21.821225  448124 kubeadm.go:318] OS: Linux
	I1003 19:28:21.821274  448124 kubeadm.go:318] CGROUPS_CPU: enabled
	I1003 19:28:21.821325  448124 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1003 19:28:21.821375  448124 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1003 19:28:21.821425  448124 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1003 19:28:21.821476  448124 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1003 19:28:21.821531  448124 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1003 19:28:21.821579  448124 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1003 19:28:21.821629  448124 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1003 19:28:21.821678  448124 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1003 19:28:21.893400  448124 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1003 19:28:21.893547  448124 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1003 19:28:21.893694  448124 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1003 19:28:21.905164  448124 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1003 19:28:21.911977  448124 out.go:252]   - Generating certificates and keys ...
	I1003 19:28:21.912080  448124 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1003 19:28:21.912158  448124 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1003 19:28:21.912263  448124 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1003 19:28:21.912329  448124 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1003 19:28:21.912405  448124 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1003 19:28:21.912468  448124 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1003 19:28:21.912541  448124 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1003 19:28:21.912610  448124 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1003 19:28:21.912694  448124 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1003 19:28:21.912786  448124 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1003 19:28:21.912832  448124 kubeadm.go:318] [certs] Using the existing "sa" key
	I1003 19:28:21.912900  448124 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1003 19:28:22.346766  448124 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1003 19:28:22.738855  448124 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1003 19:28:23.202029  448124 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1003 19:28:24.078345  448124 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1003 19:28:24.225561  448124 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1003 19:28:24.226320  448124 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1003 19:28:24.228968  448124 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1003 19:28:24.233210  448124 out.go:252]   - Booting up control plane ...
	I1003 19:28:24.233328  448124 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1003 19:28:24.233422  448124 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1003 19:28:24.235948  448124 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1003 19:28:24.251647  448124 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1003 19:28:24.251950  448124 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1003 19:28:24.260445  448124 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1003 19:28:24.260799  448124 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1003 19:28:24.260991  448124 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1003 19:28:24.405021  448124 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1003 19:28:24.405150  448124 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1003 19:28:27.405369  448124 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 3.001387961s
	I1003 19:28:27.413151  448124 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1003 19:28:27.413255  448124 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1003 19:28:27.413350  448124 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1003 19:28:27.413446  448124 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1003 19:30:12.986550  454580 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001161121s
	I1003 19:30:12.986675  454580 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000797529s
	I1003 19:30:12.988014  454580 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001354323s
	I1003 19:30:12.988036  454580 kubeadm.go:318] 
	I1003 19:30:12.988130  454580 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1003 19:30:12.988236  454580 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1003 19:30:12.988383  454580 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1003 19:30:12.988487  454580 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1003 19:30:12.988570  454580 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1003 19:30:12.988657  454580 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1003 19:30:12.988693  454580 kubeadm.go:318] 
	I1003 19:30:12.993483  454580 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1003 19:30:12.993735  454580 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1003 19:30:12.993851  454580 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1003 19:30:12.994434  454580 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.85.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1003 19:30:12.994510  454580 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	W1003 19:30:12.994643  454580 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-env-159095 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-159095 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501726365s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001161121s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000797529s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001354323s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.85.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	I1003 19:30:12.994727  454580 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1003 19:30:13.535012  454580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 19:30:13.549036  454580 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1003 19:30:13.549101  454580 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 19:30:13.558405  454580 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1003 19:30:13.558421  454580 kubeadm.go:157] found existing configuration files:
	
	I1003 19:30:13.558475  454580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1003 19:30:13.566578  454580 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1003 19:30:13.566637  454580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1003 19:30:13.574418  454580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1003 19:30:13.582218  454580 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1003 19:30:13.582289  454580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1003 19:30:13.589696  454580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1003 19:30:13.597779  454580 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1003 19:30:13.597850  454580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1003 19:30:13.605196  454580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1003 19:30:13.613250  454580 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1003 19:30:13.613310  454580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1003 19:30:13.620867  454580 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1003 19:30:13.684886  454580 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1003 19:30:13.685182  454580 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1003 19:30:13.763851  454580 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1003 19:32:27.413964  448124 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.000708724s
	I1003 19:32:27.414101  448124 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000529448s
	I1003 19:32:27.414633  448124 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000761263s
	I1003 19:32:27.414655  448124 kubeadm.go:318] 
	I1003 19:32:27.414751  448124 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1003 19:32:27.414838  448124 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1003 19:32:27.414935  448124 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1003 19:32:27.415036  448124 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1003 19:32:27.415124  448124 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1003 19:32:27.415214  448124 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1003 19:32:27.415226  448124 kubeadm.go:318] 
	I1003 19:32:27.419315  448124 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1003 19:32:27.419558  448124 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1003 19:32:27.419673  448124 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1003 19:32:27.420237  448124 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	I1003 19:32:27.420346  448124 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1003 19:32:27.420405  448124 kubeadm.go:402] duration metric: took 8m16.811540578s to StartCluster
	I1003 19:32:27.420447  448124 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 19:32:27.420511  448124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 19:32:27.445962  448124 cri.go:89] found id: ""
	I1003 19:32:27.446000  448124 logs.go:282] 0 containers: []
	W1003 19:32:27.446010  448124 logs.go:284] No container was found matching "kube-apiserver"
	I1003 19:32:27.446018  448124 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 19:32:27.446077  448124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 19:32:27.478379  448124 cri.go:89] found id: ""
	I1003 19:32:27.478406  448124 logs.go:282] 0 containers: []
	W1003 19:32:27.478415  448124 logs.go:284] No container was found matching "etcd"
	I1003 19:32:27.478421  448124 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 19:32:27.478482  448124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 19:32:27.509179  448124 cri.go:89] found id: ""
	I1003 19:32:27.509205  448124 logs.go:282] 0 containers: []
	W1003 19:32:27.509214  448124 logs.go:284] No container was found matching "coredns"
	I1003 19:32:27.509221  448124 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 19:32:27.509278  448124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 19:32:27.534898  448124 cri.go:89] found id: ""
	I1003 19:32:27.534923  448124 logs.go:282] 0 containers: []
	W1003 19:32:27.534933  448124 logs.go:284] No container was found matching "kube-scheduler"
	I1003 19:32:27.534940  448124 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 19:32:27.535031  448124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 19:32:27.561607  448124 cri.go:89] found id: ""
	I1003 19:32:27.561633  448124 logs.go:282] 0 containers: []
	W1003 19:32:27.561642  448124 logs.go:284] No container was found matching "kube-proxy"
	I1003 19:32:27.561649  448124 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 19:32:27.561712  448124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 19:32:27.587705  448124 cri.go:89] found id: ""
	I1003 19:32:27.587735  448124 logs.go:282] 0 containers: []
	W1003 19:32:27.587744  448124 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 19:32:27.587752  448124 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 19:32:27.587811  448124 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 19:32:27.614585  448124 cri.go:89] found id: ""
	I1003 19:32:27.614613  448124 logs.go:282] 0 containers: []
	W1003 19:32:27.614622  448124 logs.go:284] No container was found matching "kindnet"
	I1003 19:32:27.614632  448124 logs.go:123] Gathering logs for kubelet ...
	I1003 19:32:27.614643  448124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 19:32:27.703594  448124 logs.go:123] Gathering logs for dmesg ...
	I1003 19:32:27.703630  448124 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 19:32:27.719856  448124 logs.go:123] Gathering logs for describe nodes ...
	I1003 19:32:27.719887  448124 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 19:32:27.790531  448124 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 19:32:27.782685    2343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 19:32:27.783378    2343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 19:32:27.784615    2343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 19:32:27.785128    2343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 19:32:27.786586    2343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 19:32:27.782685    2343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 19:32:27.783378    2343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 19:32:27.784615    2343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 19:32:27.785128    2343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 19:32:27.786586    2343 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 19:32:27.790554  448124 logs.go:123] Gathering logs for CRI-O ...
	I1003 19:32:27.790568  448124 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 19:32:27.869907  448124 logs.go:123] Gathering logs for container status ...
	I1003 19:32:27.869944  448124 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1003 19:32:27.899735  448124 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 3.001387961s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000708724s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000529448s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000761263s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1003 19:32:27.899791  448124 out.go:285] * 
	W1003 19:32:27.899845  448124 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 3.001387961s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000708724s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000529448s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000761263s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1003 19:32:27.899865  448124 out.go:285] * 
	W1003 19:32:27.902016  448124 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 19:32:27.909498  448124 out.go:203] 
	W1003 19:32:27.913285  448124 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 3.001387961s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-scheduler is not healthy after 4m0.000708724s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000529448s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000761263s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused, kube-apiserver check failed at https://192.168.76.2:8443/livez: client rate limiter Wait returned an error: context deadline exceeded, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1003 19:32:27.913318  448124 out.go:285] * 
	I1003 19:32:27.916409  448124 out.go:203] 
	
	
	==> CRI-O <==
	Oct 03 19:32:19 force-systemd-flag-855981 crio[833]: time="2025-10-03T19:32:19.011537774Z" level=info msg="createCtr: removing container edab375bc80a77fb0aade619fa083a47a33cca56b7eef2d014cc269fdb02898f" id=82113415-4727-40e2-9aa8-c4304727d49b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:32:19 force-systemd-flag-855981 crio[833]: time="2025-10-03T19:32:19.011572835Z" level=info msg="createCtr: deleting container edab375bc80a77fb0aade619fa083a47a33cca56b7eef2d014cc269fdb02898f from storage" id=82113415-4727-40e2-9aa8-c4304727d49b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:32:19 force-systemd-flag-855981 crio[833]: time="2025-10-03T19:32:19.014467071Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-force-systemd-flag-855981_kube-system_13aa32d45411921444a6c51068551c1c_0" id=82113415-4727-40e2-9aa8-c4304727d49b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:32:24 force-systemd-flag-855981 crio[833]: time="2025-10-03T19:32:24.990428325Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=cc898f25-98ca-4efb-8932-7577232ebb62 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 19:32:24 force-systemd-flag-855981 crio[833]: time="2025-10-03T19:32:24.991324217Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.34.1" id=d4d7ac38-d30a-4093-a58c-11986fc2c202 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 19:32:24 force-systemd-flag-855981 crio[833]: time="2025-10-03T19:32:24.992168744Z" level=info msg="Creating container: kube-system/kube-scheduler-force-systemd-flag-855981/kube-scheduler" id=70a8bf0a-335d-4d4d-9ee2-8a56e71ff032 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:32:24 force-systemd-flag-855981 crio[833]: time="2025-10-03T19:32:24.992430943Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 19:32:24 force-systemd-flag-855981 crio[833]: time="2025-10-03T19:32:24.997630942Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 19:32:24 force-systemd-flag-855981 crio[833]: time="2025-10-03T19:32:24.998383463Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 19:32:25 force-systemd-flag-855981 crio[833]: time="2025-10-03T19:32:25.010105026Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=70a8bf0a-335d-4d4d-9ee2-8a56e71ff032 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:32:25 force-systemd-flag-855981 crio[833]: time="2025-10-03T19:32:25.011534949Z" level=info msg="createCtr: deleting container ID 112753175e165c6137b190c92a83325d5fc1d48705d107ade1c84754ccaf45c9 from idIndex" id=70a8bf0a-335d-4d4d-9ee2-8a56e71ff032 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:32:25 force-systemd-flag-855981 crio[833]: time="2025-10-03T19:32:25.011587759Z" level=info msg="createCtr: removing container 112753175e165c6137b190c92a83325d5fc1d48705d107ade1c84754ccaf45c9" id=70a8bf0a-335d-4d4d-9ee2-8a56e71ff032 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:32:25 force-systemd-flag-855981 crio[833]: time="2025-10-03T19:32:25.01162858Z" level=info msg="createCtr: deleting container 112753175e165c6137b190c92a83325d5fc1d48705d107ade1c84754ccaf45c9 from storage" id=70a8bf0a-335d-4d4d-9ee2-8a56e71ff032 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:32:25 force-systemd-flag-855981 crio[833]: time="2025-10-03T19:32:25.014376721Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-force-systemd-flag-855981_kube-system_933519c454d90dd4024bd0e16459e444_0" id=70a8bf0a-335d-4d4d-9ee2-8a56e71ff032 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:32:27 force-systemd-flag-855981 crio[833]: time="2025-10-03T19:32:27.991815211Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=da4c80c3-5cdc-4f05-bdfb-f06e441f5150 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 19:32:27 force-systemd-flag-855981 crio[833]: time="2025-10-03T19:32:27.993318818Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.1" id=7edb1fad-321f-4cb1-bf16-7c4fd82a39c7 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 19:32:27 force-systemd-flag-855981 crio[833]: time="2025-10-03T19:32:27.994739749Z" level=info msg="Creating container: kube-system/kube-controller-manager-force-systemd-flag-855981/kube-controller-manager" id=48a8d25a-2ed6-4c47-8847-376334b3e36c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:32:27 force-systemd-flag-855981 crio[833]: time="2025-10-03T19:32:27.99507948Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 19:32:28 force-systemd-flag-855981 crio[833]: time="2025-10-03T19:32:28.007119825Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 19:32:28 force-systemd-flag-855981 crio[833]: time="2025-10-03T19:32:28.008108051Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 19:32:28 force-systemd-flag-855981 crio[833]: time="2025-10-03T19:32:28.023134572Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=48a8d25a-2ed6-4c47-8847-376334b3e36c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:32:28 force-systemd-flag-855981 crio[833]: time="2025-10-03T19:32:28.028042447Z" level=info msg="createCtr: deleting container ID 3b35e3a54c291716d4d5c8e7dd7ca11723d5cda990d8b59ded74754c24ee6d30 from idIndex" id=48a8d25a-2ed6-4c47-8847-376334b3e36c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:32:28 force-systemd-flag-855981 crio[833]: time="2025-10-03T19:32:28.028230684Z" level=info msg="createCtr: removing container 3b35e3a54c291716d4d5c8e7dd7ca11723d5cda990d8b59ded74754c24ee6d30" id=48a8d25a-2ed6-4c47-8847-376334b3e36c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:32:28 force-systemd-flag-855981 crio[833]: time="2025-10-03T19:32:28.028397784Z" level=info msg="createCtr: deleting container 3b35e3a54c291716d4d5c8e7dd7ca11723d5cda990d8b59ded74754c24ee6d30 from storage" id=48a8d25a-2ed6-4c47-8847-376334b3e36c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:32:28 force-systemd-flag-855981 crio[833]: time="2025-10-03T19:32:28.037823619Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-force-systemd-flag-855981_kube-system_2fce964a23a928b8d37cb6106b912a18_0" id=48a8d25a-2ed6-4c47-8847-376334b3e36c name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 19:32:29.288292    2470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 19:32:29.289127    2470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 19:32:29.290687    2470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 19:32:29.291226    2470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 19:32:29.292852    2470 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 3 18:58] overlayfs: idmapped layers are currently not supported
	[Oct 3 18:59] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:00] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:05] overlayfs: idmapped layers are currently not supported
	[ +33.149550] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:07] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:08] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:09] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:10] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:11] overlayfs: idmapped layers are currently not supported
	[  +4.287643] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:12] overlayfs: idmapped layers are currently not supported
	[ +24.839009] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:13] overlayfs: idmapped layers are currently not supported
	[ +26.493253] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:15] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:16] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:17] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000010] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Oct 3 19:18] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:20] overlayfs: idmapped layers are currently not supported
	[ +32.018892] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:22] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:24] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:26] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 19:32:29 up  2:15,  0 user,  load average: 0.01, 0.75, 1.63
	Linux force-systemd-flag-855981 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 03 19:32:19 force-systemd-flag-855981 kubelet[1768]: E1003 19:32:19.203574    1768 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.76.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.76.2:8443: connect: connection refused" event="&Event{ObjectMeta:{force-systemd-flag-855981.186b11d9e653ffd5  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:force-systemd-flag-855981,UID:force-systemd-flag-855981,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node force-systemd-flag-855981 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:force-systemd-flag-855981,},FirstTimestamp:2025-10-03 19:28:26.988150741 +0000 UTC m=+2.584933609,LastTimestamp:2025-10-03 19:28:26.988150741 +0000 UTC m=+2.584933609,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:k
ubelet,ReportingInstance:force-systemd-flag-855981,}"
	Oct 03 19:32:22 force-systemd-flag-855981 kubelet[1768]: E1003 19:32:22.750705    1768 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.76.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	Oct 03 19:32:23 force-systemd-flag-855981 kubelet[1768]: E1003 19:32:23.588787    1768 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.76.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/force-systemd-flag-855981?timeout=10s\": dial tcp 192.168.76.2:8443: connect: connection refused" interval="7s"
	Oct 03 19:32:23 force-systemd-flag-855981 kubelet[1768]: I1003 19:32:23.776028    1768 kubelet_node_status.go:75] "Attempting to register node" node="force-systemd-flag-855981"
	Oct 03 19:32:23 force-systemd-flag-855981 kubelet[1768]: E1003 19:32:23.776584    1768 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.76.2:8443/api/v1/nodes\": dial tcp 192.168.76.2:8443: connect: connection refused" node="force-systemd-flag-855981"
	Oct 03 19:32:24 force-systemd-flag-855981 kubelet[1768]: E1003 19:32:24.989903    1768 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"force-systemd-flag-855981\" not found" node="force-systemd-flag-855981"
	Oct 03 19:32:25 force-systemd-flag-855981 kubelet[1768]: E1003 19:32:25.014694    1768 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 03 19:32:25 force-systemd-flag-855981 kubelet[1768]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 03 19:32:25 force-systemd-flag-855981 kubelet[1768]:  > podSandboxID="cda9d3c2368d34847111c9a11be0e627be4752a3061a50db61eb90c1eeb8ff1f"
	Oct 03 19:32:25 force-systemd-flag-855981 kubelet[1768]: E1003 19:32:25.014802    1768 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 03 19:32:25 force-systemd-flag-855981 kubelet[1768]:         container kube-scheduler start failed in pod kube-scheduler-force-systemd-flag-855981_kube-system(933519c454d90dd4024bd0e16459e444): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 19:32:25 force-systemd-flag-855981 kubelet[1768]:  > logger="UnhandledError"
	Oct 03 19:32:25 force-systemd-flag-855981 kubelet[1768]: E1003 19:32:25.014835    1768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-force-systemd-flag-855981" podUID="933519c454d90dd4024bd0e16459e444"
	Oct 03 19:32:26 force-systemd-flag-855981 kubelet[1768]: E1003 19:32:26.387662    1768 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.76.2:8443/api/v1/nodes?fieldSelector=metadata.name%3Dforce-systemd-flag-855981&limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	Oct 03 19:32:26 force-systemd-flag-855981 kubelet[1768]: E1003 19:32:26.503831    1768 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.76.2:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.76.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	Oct 03 19:32:27 force-systemd-flag-855981 kubelet[1768]: E1003 19:32:27.017060    1768 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"force-systemd-flag-855981\" not found"
	Oct 03 19:32:27 force-systemd-flag-855981 kubelet[1768]: E1003 19:32:27.990841    1768 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"force-systemd-flag-855981\" not found" node="force-systemd-flag-855981"
	Oct 03 19:32:28 force-systemd-flag-855981 kubelet[1768]: E1003 19:32:28.038447    1768 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 03 19:32:28 force-systemd-flag-855981 kubelet[1768]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 03 19:32:28 force-systemd-flag-855981 kubelet[1768]:  > podSandboxID="d2e9cbddabc70ddfc4e92b943c8982d6939ebad572c5fdd6ea816389f2d8f930"
	Oct 03 19:32:28 force-systemd-flag-855981 kubelet[1768]: E1003 19:32:28.038570    1768 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 03 19:32:28 force-systemd-flag-855981 kubelet[1768]:         container kube-controller-manager start failed in pod kube-controller-manager-force-systemd-flag-855981_kube-system(2fce964a23a928b8d37cb6106b912a18): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 19:32:28 force-systemd-flag-855981 kubelet[1768]:  > logger="UnhandledError"
	Oct 03 19:32:28 force-systemd-flag-855981 kubelet[1768]: E1003 19:32:28.038610    1768 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-controller-manager-force-systemd-flag-855981" podUID="2fce964a23a928b8d37cb6106b912a18"
	Oct 03 19:32:29 force-systemd-flag-855981 kubelet[1768]: E1003 19:32:29.204433    1768 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.76.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.76.2:8443: connect: connection refused" event="&Event{ObjectMeta:{force-systemd-flag-855981.186b11d9e653ffd5  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:force-systemd-flag-855981,UID:force-systemd-flag-855981,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node force-systemd-flag-855981 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:force-systemd-flag-855981,},FirstTimestamp:2025-10-03 19:28:26.988150741 +0000 UTC m=+2.584933609,LastTimestamp:2025-10-03 19:28:26.988150741 +0000 UTC m=+2.584933609,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:k
ubelet,ReportingInstance:force-systemd-flag-855981,}"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-855981 -n force-systemd-flag-855981
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-flag-855981 -n force-systemd-flag-855981: exit status 6 (318.923763ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1003 19:32:29.729903  458603 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-flag-855981" does not appear in /home/jenkins/minikube-integration/21625-284583/kubeconfig

** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "force-systemd-flag-855981" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-flag-855981" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-855981
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-855981: (1.993256683s)
--- FAIL: TestForceSystemdFlag (516.68s)
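The repeated CRI-O error in the log above, "Container creation error: cannot open sd-bus: No such file or directory", is raised each time the runtime tries to create the kube-apiserver, kube-scheduler and kube-controller-manager containers. Because this profile enforces the systemd cgroup driver (the companion TestForceSystemdEnv start log below ends with 'using "systemd" cgroup driver as enforced via flags'), the OCI runtime has to reach a systemd bus inside the node container; when it cannot, no control-plane container ever starts (the container status table above is empty) and kubeadm's wait-control-plane phase times out as shown at the top of the log. The checks below are a minimal sketch for confirming that state from the host while a profile is still up; they are not part of the test suite, the socket and config paths assume a stock systemd and CRI-O layout inside the kicbase image, and the container name is taken from the log above.

  # Is systemd actually PID 1 inside the node container?
  docker exec force-systemd-flag-855981 readlink /proc/1/exe

  # Is the systemd private bus socket, which sd-bus opens, present?
  docker exec force-systemd-flag-855981 ls -l /run/systemd/private

  # Which cgroup manager is CRI-O configured to use?
  docker exec force-systemd-flag-855981 grep -r cgroup_manager /etc/crio

If PID 1 is not systemd, or the bus socket is missing while cgroup_manager is set to "systemd", container creation will typically fail with exactly this sd-bus error.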

x
+
TestForceSystemdEnv (513.5s)
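This test exercises the same systemd enforcement as TestForceSystemdFlag above, driven here by the MINIKUBE_FORCE_SYSTEMD=true environment variable shown in the session settings below rather than a start flag. A rough way to reproduce the command under test outside the harness, assuming the same build tree and profile name, is:

  MINIKUBE_FORCE_SYSTEMD=true out/minikube-linux-arm64 start -p force-systemd-env-159095 \
    --memory=3072 --alsologtostderr -v=5 --driver=docker --container-runtime=crio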

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-159095 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1003 19:26:51.964290  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/functional-680560/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 19:29:45.417833  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 19:31:51.958327  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/functional-680560/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:155: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p force-systemd-env-159095 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: exit status 80 (8m30.081275209s)

-- stdout --
	* [force-systemd-env-159095] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21625
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21625-284583/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-284583/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=true
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "force-systemd-env-159095" primary control-plane node in "force-systemd-env-159095" cluster
	* Pulling base image v0.0.48-1759382731-21643 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	
	

-- /stdout --
** stderr ** 
	I1003 19:25:48.814935  454580 out.go:360] Setting OutFile to fd 1 ...
	I1003 19:25:48.815053  454580 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 19:25:48.815063  454580 out.go:374] Setting ErrFile to fd 2...
	I1003 19:25:48.815069  454580 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 19:25:48.815316  454580 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-284583/.minikube/bin
	I1003 19:25:48.815712  454580 out.go:368] Setting JSON to false
	I1003 19:25:48.816576  454580 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7700,"bootTime":1759511849,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1003 19:25:48.816644  454580 start.go:140] virtualization:  
	I1003 19:25:48.821913  454580 out.go:179] * [force-systemd-env-159095] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1003 19:25:48.824939  454580 out.go:179]   - MINIKUBE_LOCATION=21625
	I1003 19:25:48.825013  454580 notify.go:220] Checking for updates...
	I1003 19:25:48.830835  454580 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 19:25:48.833636  454580 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21625-284583/kubeconfig
	I1003 19:25:48.836453  454580 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-284583/.minikube
	I1003 19:25:48.839182  454580 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1003 19:25:48.842048  454580 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=true
	I1003 19:25:48.845415  454580 config.go:182] Loaded profile config "force-systemd-flag-855981": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 19:25:48.845523  454580 driver.go:421] Setting default libvirt URI to qemu:///system
	I1003 19:25:48.879322  454580 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1003 19:25:48.879461  454580 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 19:25:48.946789  454580 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-03 19:25:48.937306176 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1003 19:25:48.946902  454580 docker.go:318] overlay module found
	I1003 19:25:48.950002  454580 out.go:179] * Using the docker driver based on user configuration
	I1003 19:25:48.952919  454580 start.go:304] selected driver: docker
	I1003 19:25:48.952941  454580 start.go:924] validating driver "docker" against <nil>
	I1003 19:25:48.952956  454580 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 19:25:48.953668  454580 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 19:25:49.013568  454580 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-03 19:25:49.00374877 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1003 19:25:49.013736  454580 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1003 19:25:49.013961  454580 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1003 19:25:49.016864  454580 out.go:179] * Using Docker driver with root privileges
	I1003 19:25:49.019673  454580 cni.go:84] Creating CNI manager for ""
	I1003 19:25:49.019747  454580 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 19:25:49.019760  454580 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1003 19:25:49.019838  454580 start.go:348] cluster config:
	{Name:force-systemd-env-159095 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-159095 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 19:25:49.022979  454580 out.go:179] * Starting "force-systemd-env-159095" primary control-plane node in "force-systemd-env-159095" cluster
	I1003 19:25:49.025847  454580 cache.go:123] Beginning downloading kic base image for docker with crio
	I1003 19:25:49.028784  454580 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1003 19:25:49.031644  454580 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 19:25:49.031704  454580 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21625-284583/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1003 19:25:49.031718  454580 cache.go:58] Caching tarball of preloaded images
	I1003 19:25:49.031730  454580 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1003 19:25:49.031803  454580 preload.go:233] Found /home/jenkins/minikube-integration/21625-284583/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1003 19:25:49.031813  454580 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1003 19:25:49.031925  454580 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-env-159095/config.json ...
	I1003 19:25:49.031942  454580 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-env-159095/config.json: {Name:mkccfe0252f86bc3641a86c319cb32a0e2dd05e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:25:49.050276  454580 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1003 19:25:49.050303  454580 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1003 19:25:49.050326  454580 cache.go:232] Successfully downloaded all kic artifacts
	I1003 19:25:49.050350  454580 start.go:360] acquireMachinesLock for force-systemd-env-159095: {Name:mk3d73d31c60e2c8140d6014661a31ecf05d19cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 19:25:49.050460  454580 start.go:364] duration metric: took 89.626µs to acquireMachinesLock for "force-systemd-env-159095"
	I1003 19:25:49.050492  454580 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-159095 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-159095 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1003 19:25:49.050559  454580 start.go:125] createHost starting for "" (driver="docker")
	I1003 19:25:49.054048  454580 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1003 19:25:49.054261  454580 start.go:159] libmachine.API.Create for "force-systemd-env-159095" (driver="docker")
	I1003 19:25:49.054309  454580 client.go:168] LocalClient.Create starting
	I1003 19:25:49.054379  454580 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem
	I1003 19:25:49.054420  454580 main.go:141] libmachine: Decoding PEM data...
	I1003 19:25:49.054446  454580 main.go:141] libmachine: Parsing certificate...
	I1003 19:25:49.054506  454580 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem
	I1003 19:25:49.054528  454580 main.go:141] libmachine: Decoding PEM data...
	I1003 19:25:49.054542  454580 main.go:141] libmachine: Parsing certificate...
	I1003 19:25:49.054914  454580 cli_runner.go:164] Run: docker network inspect force-systemd-env-159095 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1003 19:25:49.070363  454580 cli_runner.go:211] docker network inspect force-systemd-env-159095 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1003 19:25:49.070449  454580 network_create.go:284] running [docker network inspect force-systemd-env-159095] to gather additional debugging logs...
	I1003 19:25:49.070465  454580 cli_runner.go:164] Run: docker network inspect force-systemd-env-159095
	W1003 19:25:49.085002  454580 cli_runner.go:211] docker network inspect force-systemd-env-159095 returned with exit code 1
	I1003 19:25:49.085035  454580 network_create.go:287] error running [docker network inspect force-systemd-env-159095]: docker network inspect force-systemd-env-159095: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-env-159095 not found
	I1003 19:25:49.085049  454580 network_create.go:289] output of [docker network inspect force-systemd-env-159095]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-env-159095 not found
	
	** /stderr **
	I1003 19:25:49.085141  454580 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 19:25:49.100506  454580 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-3a8a28910ba8 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6e:7a:d0:f8:54:63} reservation:<nil>}
	I1003 19:25:49.100966  454580 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-157403cbb468 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:8a:ee:cb:12:bf:d0} reservation:<nil>}
	I1003 19:25:49.101206  454580 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8d1e24f7a986 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:9e:1b:b1:d8:1a:13} reservation:<nil>}
	I1003 19:25:49.101480  454580 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-9bb6abe107e2 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:06:42:f2:08:ad:b8} reservation:<nil>}
	I1003 19:25:49.101924  454580 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a3a450}
	I1003 19:25:49.101949  454580 network_create.go:124] attempt to create docker network force-systemd-env-159095 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1003 19:25:49.102006  454580 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-env-159095 force-systemd-env-159095
	I1003 19:25:49.169743  454580 network_create.go:108] docker network force-systemd-env-159095 192.168.85.0/24 created
	I1003 19:25:49.169776  454580 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-env-159095" container
	I1003 19:25:49.169855  454580 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1003 19:25:49.187346  454580 cli_runner.go:164] Run: docker volume create force-systemd-env-159095 --label name.minikube.sigs.k8s.io=force-systemd-env-159095 --label created_by.minikube.sigs.k8s.io=true
	I1003 19:25:49.205736  454580 oci.go:103] Successfully created a docker volume force-systemd-env-159095
	I1003 19:25:49.205842  454580 cli_runner.go:164] Run: docker run --rm --name force-systemd-env-159095-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-159095 --entrypoint /usr/bin/test -v force-systemd-env-159095:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1003 19:25:49.705331  454580 oci.go:107] Successfully prepared a docker volume force-systemd-env-159095
	I1003 19:25:49.705367  454580 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 19:25:49.705387  454580 kic.go:194] Starting extracting preloaded images to volume ...
	I1003 19:25:49.705453  454580 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21625-284583/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-159095:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1003 19:25:54.171223  454580 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21625-284583/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-env-159095:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.465727636s)
	I1003 19:25:54.171255  454580 kic.go:203] duration metric: took 4.465864647s to extract preloaded images to volume ...
	W1003 19:25:54.171388  454580 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1003 19:25:54.171514  454580 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1003 19:25:54.221950  454580 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-env-159095 --name force-systemd-env-159095 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-env-159095 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-env-159095 --network force-systemd-env-159095 --ip 192.168.85.2 --volume force-systemd-env-159095:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1003 19:25:54.512127  454580 cli_runner.go:164] Run: docker container inspect force-systemd-env-159095 --format={{.State.Running}}
	I1003 19:25:54.536339  454580 cli_runner.go:164] Run: docker container inspect force-systemd-env-159095 --format={{.State.Status}}
	I1003 19:25:54.559306  454580 cli_runner.go:164] Run: docker exec force-systemd-env-159095 stat /var/lib/dpkg/alternatives/iptables
	I1003 19:25:54.607174  454580 oci.go:144] the created container "force-systemd-env-159095" has a running status.
	I1003 19:25:54.607210  454580 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21625-284583/.minikube/machines/force-systemd-env-159095/id_rsa...
	I1003 19:25:54.966956  454580 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-284583/.minikube/machines/force-systemd-env-159095/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1003 19:25:54.967062  454580 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21625-284583/.minikube/machines/force-systemd-env-159095/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1003 19:25:54.992594  454580 cli_runner.go:164] Run: docker container inspect force-systemd-env-159095 --format={{.State.Status}}
	I1003 19:25:55.022469  454580 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1003 19:25:55.022492  454580 kic_runner.go:114] Args: [docker exec --privileged force-systemd-env-159095 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1003 19:25:55.071006  454580 cli_runner.go:164] Run: docker container inspect force-systemd-env-159095 --format={{.State.Status}}
	I1003 19:25:55.090433  454580 machine.go:93] provisionDockerMachine start ...
	I1003 19:25:55.090531  454580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-159095
	I1003 19:25:55.109612  454580 main.go:141] libmachine: Using SSH client type: native
	I1003 19:25:55.109965  454580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33403 <nil> <nil>}
	I1003 19:25:55.109976  454580 main.go:141] libmachine: About to run SSH command:
	hostname
	I1003 19:25:55.110678  454580 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1003 19:25:58.244351  454580 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-env-159095
	
	I1003 19:25:58.244380  454580 ubuntu.go:182] provisioning hostname "force-systemd-env-159095"
	I1003 19:25:58.244459  454580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-159095
	I1003 19:25:58.261988  454580 main.go:141] libmachine: Using SSH client type: native
	I1003 19:25:58.262309  454580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33403 <nil> <nil>}
	I1003 19:25:58.262328  454580 main.go:141] libmachine: About to run SSH command:
	sudo hostname force-systemd-env-159095 && echo "force-systemd-env-159095" | sudo tee /etc/hostname
	I1003 19:25:58.401494  454580 main.go:141] libmachine: SSH cmd err, output: <nil>: force-systemd-env-159095
	
	I1003 19:25:58.401592  454580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-159095
	I1003 19:25:58.423025  454580 main.go:141] libmachine: Using SSH client type: native
	I1003 19:25:58.423333  454580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33403 <nil> <nil>}
	I1003 19:25:58.423357  454580 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-env-159095' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-env-159095/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-env-159095' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 19:25:58.552846  454580 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 19:25:58.552878  454580 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21625-284583/.minikube CaCertPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21625-284583/.minikube}
	I1003 19:25:58.552899  454580 ubuntu.go:190] setting up certificates
	I1003 19:25:58.552909  454580 provision.go:84] configureAuth start
	I1003 19:25:58.552972  454580 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-159095
	I1003 19:25:58.569674  454580 provision.go:143] copyHostCerts
	I1003 19:25:58.569717  454580 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21625-284583/.minikube/ca.pem
	I1003 19:25:58.569755  454580 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-284583/.minikube/ca.pem, removing ...
	I1003 19:25:58.569768  454580 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-284583/.minikube/ca.pem
	I1003 19:25:58.569845  454580 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21625-284583/.minikube/ca.pem (1082 bytes)
	I1003 19:25:58.569938  454580 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21625-284583/.minikube/cert.pem
	I1003 19:25:58.569966  454580 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-284583/.minikube/cert.pem, removing ...
	I1003 19:25:58.569974  454580 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-284583/.minikube/cert.pem
	I1003 19:25:58.570006  454580 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21625-284583/.minikube/cert.pem (1123 bytes)
	I1003 19:25:58.570058  454580 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21625-284583/.minikube/key.pem
	I1003 19:25:58.570083  454580 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-284583/.minikube/key.pem, removing ...
	I1003 19:25:58.570093  454580 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-284583/.minikube/key.pem
	I1003 19:25:58.570120  454580 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21625-284583/.minikube/key.pem (1675 bytes)
	I1003 19:25:58.570176  454580 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca-key.pem org=jenkins.force-systemd-env-159095 san=[127.0.0.1 192.168.85.2 force-systemd-env-159095 localhost minikube]
	I1003 19:25:58.725391  454580 provision.go:177] copyRemoteCerts
	I1003 19:25:58.725464  454580 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 19:25:58.725532  454580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-159095
	I1003 19:25:58.742899  454580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33403 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/force-systemd-env-159095/id_rsa Username:docker}
	I1003 19:25:58.836771  454580 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1003 19:25:58.836872  454580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1003 19:25:58.856421  454580 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1003 19:25:58.856490  454580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1003 19:25:58.873878  454580 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1003 19:25:58.873951  454580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 19:25:58.891695  454580 provision.go:87] duration metric: took 338.762309ms to configureAuth
	I1003 19:25:58.891727  454580 ubuntu.go:206] setting minikube options for container-runtime
	I1003 19:25:58.891909  454580 config.go:182] Loaded profile config "force-systemd-env-159095": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 19:25:58.892036  454580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-159095
	I1003 19:25:58.909849  454580 main.go:141] libmachine: Using SSH client type: native
	I1003 19:25:58.910175  454580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33403 <nil> <nil>}
	I1003 19:25:58.910196  454580 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1003 19:25:59.149250  454580 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1003 19:25:59.149275  454580 machine.go:96] duration metric: took 4.058820658s to provisionDockerMachine
	I1003 19:25:59.149286  454580 client.go:171] duration metric: took 10.094965799s to LocalClient.Create
	I1003 19:25:59.149305  454580 start.go:167] duration metric: took 10.095045906s to libmachine.API.Create "force-systemd-env-159095"
	I1003 19:25:59.149313  454580 start.go:293] postStartSetup for "force-systemd-env-159095" (driver="docker")
	I1003 19:25:59.149324  454580 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 19:25:59.149388  454580 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 19:25:59.149448  454580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-159095
	I1003 19:25:59.166472  454580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33403 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/force-systemd-env-159095/id_rsa Username:docker}
	I1003 19:25:59.260632  454580 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 19:25:59.263890  454580 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1003 19:25:59.263920  454580 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1003 19:25:59.263931  454580 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-284583/.minikube/addons for local assets ...
	I1003 19:25:59.263982  454580 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-284583/.minikube/files for local assets ...
	I1003 19:25:59.264083  454580 filesync.go:149] local asset: /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem -> 2864342.pem in /etc/ssl/certs
	I1003 19:25:59.264095  454580 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem -> /etc/ssl/certs/2864342.pem
	I1003 19:25:59.264197  454580 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 19:25:59.271311  454580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem --> /etc/ssl/certs/2864342.pem (1708 bytes)
	I1003 19:25:59.288099  454580 start.go:296] duration metric: took 138.770343ms for postStartSetup
	I1003 19:25:59.288518  454580 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-159095
	I1003 19:25:59.304616  454580 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-env-159095/config.json ...
	I1003 19:25:59.305154  454580 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 19:25:59.305208  454580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-159095
	I1003 19:25:59.321231  454580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33403 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/force-systemd-env-159095/id_rsa Username:docker}
	I1003 19:25:59.414050  454580 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1003 19:25:59.418889  454580 start.go:128] duration metric: took 10.36831561s to createHost
	I1003 19:25:59.418914  454580 start.go:83] releasing machines lock for "force-systemd-env-159095", held for 10.368440084s
	I1003 19:25:59.418992  454580 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-env-159095
	I1003 19:25:59.438852  454580 ssh_runner.go:195] Run: cat /version.json
	I1003 19:25:59.438921  454580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-159095
	I1003 19:25:59.439197  454580 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 19:25:59.439259  454580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-env-159095
	I1003 19:25:59.456646  454580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33403 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/force-systemd-env-159095/id_rsa Username:docker}
	I1003 19:25:59.471733  454580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33403 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/force-systemd-env-159095/id_rsa Username:docker}
	I1003 19:25:59.552282  454580 ssh_runner.go:195] Run: systemctl --version
	I1003 19:25:59.640951  454580 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1003 19:25:59.676663  454580 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1003 19:25:59.681571  454580 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 19:25:59.681666  454580 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 19:25:59.710119  454580 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1003 19:25:59.710186  454580 start.go:495] detecting cgroup driver to use...
	I1003 19:25:59.710221  454580 start.go:499] using "systemd" cgroup driver as enforced via flags
	I1003 19:25:59.710292  454580 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 19:25:59.728779  454580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 19:25:59.741907  454580 docker.go:218] disabling cri-docker service (if available) ...
	I1003 19:25:59.741976  454580 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1003 19:25:59.760303  454580 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1003 19:25:59.779405  454580 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1003 19:25:59.910096  454580 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1003 19:26:00.057287  454580 docker.go:234] disabling docker service ...
	I1003 19:26:00.057364  454580 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1003 19:26:00.101800  454580 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1003 19:26:00.120548  454580 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1003 19:26:00.359493  454580 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1003 19:26:00.513222  454580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1003 19:26:00.528783  454580 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 19:26:00.544824  454580 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1003 19:26:00.544920  454580 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:26:00.554512  454580 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1003 19:26:00.554583  454580 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:26:00.564211  454580 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:26:00.573228  454580 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:26:00.582012  454580 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 19:26:00.590364  454580 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:26:00.599344  454580 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:26:00.612579  454580 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:26:00.622182  454580 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 19:26:00.630261  454580 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 19:26:00.637849  454580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 19:26:00.759932  454580 ssh_runner.go:195] Run: sudo systemctl restart crio
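Editor's sketch (not part of the captured run): the sed edits above point CRI-O at the registry.k8s.io/pause:3.10.1 pause image and switch it to the systemd cgroup manager, with conmon in the "pod" cgroup, in /etc/crio/crio.conf.d/02-crio.conf before the restart. Assuming shell access to the node, a minimal spot-check of that file would be:

	grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf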
	I1003 19:26:00.895938  454580 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1003 19:26:00.896060  454580 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1003 19:26:00.899748  454580 start.go:563] Will wait 60s for crictl version
	I1003 19:26:00.899851  454580 ssh_runner.go:195] Run: which crictl
	I1003 19:26:00.903366  454580 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1003 19:26:00.927598  454580 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1003 19:26:00.927690  454580 ssh_runner.go:195] Run: crio --version
	I1003 19:26:00.956507  454580 ssh_runner.go:195] Run: crio --version
	I1003 19:26:00.987017  454580 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1003 19:26:00.989778  454580 cli_runner.go:164] Run: docker network inspect force-systemd-env-159095 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 19:26:01.006137  454580 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1003 19:26:01.010572  454580 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 19:26:01.020373  454580 kubeadm.go:883] updating cluster {Name:force-systemd-env-159095 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-159095 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1003 19:26:01.020502  454580 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 19:26:01.020562  454580 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 19:26:01.052628  454580 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 19:26:01.052653  454580 crio.go:433] Images already preloaded, skipping extraction
	I1003 19:26:01.052708  454580 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 19:26:01.094457  454580 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 19:26:01.094480  454580 cache_images.go:85] Images are preloaded, skipping loading
	I1003 19:26:01.094488  454580 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1003 19:26:01.094571  454580 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-env-159095 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-159095 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1003 19:26:01.094654  454580 ssh_runner.go:195] Run: crio config
	I1003 19:26:01.193577  454580 cni.go:84] Creating CNI manager for ""
	I1003 19:26:01.193603  454580 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 19:26:01.193624  454580 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1003 19:26:01.193648  454580 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-env-159095 NodeName:force-systemd-env-159095 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 19:26:01.193786  454580 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-env-159095"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
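Editor's sketch (not part of the captured run): the kubeadm configuration above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) is staged as /var/tmp/minikube/kubeadm.yaml.new and copied to /var/tmp/minikube/kubeadm.yaml before the `kubeadm init --config` invocation further down. Assuming the `kubeadm config validate` subcommand is available in this kubeadm build, such a file could be sanity-checked by hand with:

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml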
	I1003 19:26:01.193870  454580 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1003 19:26:01.203259  454580 binaries.go:44] Found k8s binaries, skipping transfer
	I1003 19:26:01.203361  454580 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1003 19:26:01.212693  454580 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1003 19:26:01.227330  454580 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 19:26:01.241438  454580 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2220 bytes)
	I1003 19:26:01.255108  454580 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1003 19:26:01.259043  454580 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 19:26:01.269128  454580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 19:26:01.394659  454580 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 19:26:01.411684  454580 certs.go:69] Setting up /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-env-159095 for IP: 192.168.85.2
	I1003 19:26:01.411706  454580 certs.go:195] generating shared ca certs ...
	I1003 19:26:01.411723  454580 certs.go:227] acquiring lock for ca certs: {Name:mk5a10e6c921326e9c211447576eaeb893259ba7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:26:01.411941  454580 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21625-284583/.minikube/ca.key
	I1003 19:26:01.412027  454580 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.key
	I1003 19:26:01.412041  454580 certs.go:257] generating profile certs ...
	I1003 19:26:01.412117  454580 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-env-159095/client.key
	I1003 19:26:01.412166  454580 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-env-159095/client.crt with IP's: []
	I1003 19:26:01.766820  454580 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-env-159095/client.crt ...
	I1003 19:26:01.766856  454580 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-env-159095/client.crt: {Name:mk51666518138b5a2e219819702e236e76872a78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:26:01.767096  454580 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-env-159095/client.key ...
	I1003 19:26:01.767116  454580 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-env-159095/client.key: {Name:mkfcf481461d38e104e159039c71e04647b08ed2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:26:01.767219  454580 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-env-159095/apiserver.key.273a6662
	I1003 19:26:01.767240  454580 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-env-159095/apiserver.crt.273a6662 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1003 19:26:02.216829  454580 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-env-159095/apiserver.crt.273a6662 ...
	I1003 19:26:02.216863  454580 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-env-159095/apiserver.crt.273a6662: {Name:mk1f4f655e70df952f523e5ea19eff6145f62906 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:26:02.217058  454580 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-env-159095/apiserver.key.273a6662 ...
	I1003 19:26:02.217072  454580 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-env-159095/apiserver.key.273a6662: {Name:mk8d83f54e21a45bfbebd7f368a1696954444530 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:26:02.217158  454580 certs.go:382] copying /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-env-159095/apiserver.crt.273a6662 -> /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-env-159095/apiserver.crt
	I1003 19:26:02.217239  454580 certs.go:386] copying /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-env-159095/apiserver.key.273a6662 -> /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-env-159095/apiserver.key
	I1003 19:26:02.217303  454580 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-env-159095/proxy-client.key
	I1003 19:26:02.217321  454580 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-env-159095/proxy-client.crt with IP's: []
	I1003 19:26:03.286918  454580 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-env-159095/proxy-client.crt ...
	I1003 19:26:03.286950  454580 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-env-159095/proxy-client.crt: {Name:mk4936fc3ff95065d787c9e24dc27c1b043c8db0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:26:03.287147  454580 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-env-159095/proxy-client.key ...
	I1003 19:26:03.287161  454580 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-env-159095/proxy-client.key: {Name:mke42ad861862d33b4d28d4bb87c7f88e4ef1b0b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:26:03.287252  454580 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-284583/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1003 19:26:03.287278  454580 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-284583/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1003 19:26:03.287290  454580 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1003 19:26:03.287301  454580 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1003 19:26:03.287312  454580 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-env-159095/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1003 19:26:03.287328  454580 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-env-159095/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1003 19:26:03.287344  454580 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-env-159095/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1003 19:26:03.287361  454580 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-env-159095/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1003 19:26:03.287415  454580 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/286434.pem (1338 bytes)
	W1003 19:26:03.287455  454580 certs.go:480] ignoring /home/jenkins/minikube-integration/21625-284583/.minikube/certs/286434_empty.pem, impossibly tiny 0 bytes
	I1003 19:26:03.287467  454580 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 19:26:03.287491  454580 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem (1082 bytes)
	I1003 19:26:03.287519  454580 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem (1123 bytes)
	I1003 19:26:03.287546  454580 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/key.pem (1675 bytes)
	I1003 19:26:03.287587  454580 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem (1708 bytes)
	I1003 19:26:03.287618  454580 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/286434.pem -> /usr/share/ca-certificates/286434.pem
	I1003 19:26:03.287634  454580 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem -> /usr/share/ca-certificates/2864342.pem
	I1003 19:26:03.287646  454580 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21625-284583/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1003 19:26:03.288259  454580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 19:26:03.307697  454580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1003 19:26:03.325577  454580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 19:26:03.343492  454580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1003 19:26:03.360867  454580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-env-159095/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1003 19:26:03.378948  454580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-env-159095/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1003 19:26:03.396233  454580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-env-159095/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 19:26:03.413528  454580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/force-systemd-env-159095/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1003 19:26:03.430564  454580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/certs/286434.pem --> /usr/share/ca-certificates/286434.pem (1338 bytes)
	I1003 19:26:03.447794  454580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem --> /usr/share/ca-certificates/2864342.pem (1708 bytes)
	I1003 19:26:03.464647  454580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 19:26:03.482186  454580 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1003 19:26:03.495617  454580 ssh_runner.go:195] Run: openssl version
	I1003 19:26:03.502334  454580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/286434.pem && ln -fs /usr/share/ca-certificates/286434.pem /etc/ssl/certs/286434.pem"
	I1003 19:26:03.510842  454580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/286434.pem
	I1003 19:26:03.514769  454580 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  3 18:34 /usr/share/ca-certificates/286434.pem
	I1003 19:26:03.514860  454580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/286434.pem
	I1003 19:26:03.556255  454580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/286434.pem /etc/ssl/certs/51391683.0"
	I1003 19:26:03.564677  454580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2864342.pem && ln -fs /usr/share/ca-certificates/2864342.pem /etc/ssl/certs/2864342.pem"
	I1003 19:26:03.573088  454580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2864342.pem
	I1003 19:26:03.576763  454580 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  3 18:34 /usr/share/ca-certificates/2864342.pem
	I1003 19:26:03.576835  454580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2864342.pem
	I1003 19:26:03.619212  454580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2864342.pem /etc/ssl/certs/3ec20f2e.0"
	I1003 19:26:03.627695  454580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 19:26:03.636292  454580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 19:26:03.640517  454580 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  3 18:27 /usr/share/ca-certificates/minikubeCA.pem
	I1003 19:26:03.640630  454580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 19:26:03.682789  454580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
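Editor's sketch (not part of the captured run): the test/ln commands above follow OpenSSL's hashed-symlink convention, linking each CA file in /usr/share/ca-certificates into /etc/ssl/certs under the subject hash that `openssl x509 -hash` prints (b5213941 for minikubeCA.pem, per the link created above). Assuming shell access to the node, one hash can be reproduced by hand with:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem    # expected to print b5213941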
	I1003 19:26:03.691160  454580 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 19:26:03.694525  454580 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1003 19:26:03.694580  454580 kubeadm.go:400] StartCluster: {Name:force-systemd-env-159095 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-159095 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 19:26:03.694659  454580 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1003 19:26:03.694722  454580 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1003 19:26:03.721506  454580 cri.go:89] found id: ""
	I1003 19:26:03.721573  454580 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 19:26:03.729182  454580 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1003 19:26:03.736957  454580 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1003 19:26:03.737070  454580 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 19:26:03.745164  454580 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1003 19:26:03.745188  454580 kubeadm.go:157] found existing configuration files:
	
	I1003 19:26:03.745246  454580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1003 19:26:03.753423  454580 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1003 19:26:03.753494  454580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1003 19:26:03.761502  454580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1003 19:26:03.769271  454580 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1003 19:26:03.769367  454580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1003 19:26:03.776549  454580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1003 19:26:03.784475  454580 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1003 19:26:03.784542  454580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1003 19:26:03.792169  454580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1003 19:26:03.800093  454580 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1003 19:26:03.800214  454580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1003 19:26:03.807653  454580 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1003 19:26:03.846939  454580 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1003 19:26:03.847060  454580 kubeadm.go:318] [preflight] Running pre-flight checks
	I1003 19:26:03.875748  454580 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1003 19:26:03.875864  454580 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1003 19:26:03.875918  454580 kubeadm.go:318] OS: Linux
	I1003 19:26:03.875991  454580 kubeadm.go:318] CGROUPS_CPU: enabled
	I1003 19:26:03.876065  454580 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1003 19:26:03.876135  454580 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1003 19:26:03.876201  454580 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1003 19:26:03.876273  454580 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1003 19:26:03.876344  454580 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1003 19:26:03.876414  454580 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1003 19:26:03.876481  454580 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1003 19:26:03.876553  454580 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1003 19:26:03.943371  454580 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1003 19:26:03.943496  454580 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1003 19:26:03.943598  454580 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1003 19:26:03.954937  454580 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1003 19:26:03.958146  454580 out.go:252]   - Generating certificates and keys ...
	I1003 19:26:03.958243  454580 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1003 19:26:03.958316  454580 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1003 19:26:04.891896  454580 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1003 19:26:05.663439  454580 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1003 19:26:05.979405  454580 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1003 19:26:06.329112  454580 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1003 19:26:07.276225  454580 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1003 19:26:07.276609  454580 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [force-systemd-env-159095 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1003 19:26:07.432801  454580 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1003 19:26:07.433347  454580 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-159095 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1003 19:26:08.086133  454580 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1003 19:26:08.342028  454580 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1003 19:26:08.968895  454580 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1003 19:26:08.969186  454580 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1003 19:26:09.745221  454580 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1003 19:26:09.887350  454580 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1003 19:26:10.478301  454580 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1003 19:26:11.115450  454580 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1003 19:26:11.311431  454580 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1003 19:26:11.312258  454580 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1003 19:26:11.314985  454580 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1003 19:26:11.319587  454580 out.go:252]   - Booting up control plane ...
	I1003 19:26:11.319706  454580 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1003 19:26:11.319794  454580 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1003 19:26:11.319869  454580 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1003 19:26:11.336173  454580 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1003 19:26:11.336463  454580 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1003 19:26:11.343972  454580 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1003 19:26:11.344309  454580 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1003 19:26:11.345049  454580 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1003 19:26:11.474759  454580 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1003 19:26:11.474886  454580 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1003 19:26:12.976413  454580 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501726365s
	I1003 19:26:12.985214  454580 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1003 19:26:12.985316  454580 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1003 19:26:12.985619  454580 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1003 19:26:12.985704  454580 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1003 19:30:12.986550  454580 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.001161121s
	I1003 19:30:12.986675  454580 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000797529s
	I1003 19:30:12.988014  454580 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.001354323s
	I1003 19:30:12.988036  454580 kubeadm.go:318] 
	I1003 19:30:12.988130  454580 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1003 19:30:12.988236  454580 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1003 19:30:12.988383  454580 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1003 19:30:12.988487  454580 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1003 19:30:12.988570  454580 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1003 19:30:12.988657  454580 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1003 19:30:12.988693  454580 kubeadm.go:318] 
	I1003 19:30:12.993483  454580 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1003 19:30:12.993735  454580 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1003 19:30:12.993851  454580 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1003 19:30:12.994434  454580 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.85.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1003 19:30:12.994510  454580 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	W1003 19:30:12.994643  454580 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [force-systemd-env-159095 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [force-systemd-env-159095 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 1.501726365s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-apiserver is not healthy after 4m0.001161121s
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000797529s
	[control-plane-check] kube-scheduler is not healthy after 4m0.001354323s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-apiserver check failed at https://192.168.85.2:8443/livez: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline, kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	
	I1003 19:30:12.994727  454580 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1003 19:30:13.535012  454580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 19:30:13.549036  454580 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1003 19:30:13.549101  454580 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 19:30:13.558405  454580 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1003 19:30:13.558421  454580 kubeadm.go:157] found existing configuration files:
	
	I1003 19:30:13.558475  454580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1003 19:30:13.566578  454580 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1003 19:30:13.566637  454580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1003 19:30:13.574418  454580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1003 19:30:13.582218  454580 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1003 19:30:13.582289  454580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1003 19:30:13.589696  454580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1003 19:30:13.597779  454580 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1003 19:30:13.597850  454580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1003 19:30:13.605196  454580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1003 19:30:13.613250  454580 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1003 19:30:13.613310  454580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1003 19:30:13.620867  454580 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1003 19:30:13.684886  454580 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1003 19:30:13.685182  454580 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1003 19:30:13.763851  454580 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1003 19:34:18.347934  454580 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.85.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.85.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1003 19:34:18.348034  454580 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1003 19:34:18.352589  454580 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1003 19:34:18.352654  454580 kubeadm.go:318] [preflight] Running pre-flight checks
	I1003 19:34:18.352782  454580 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1003 19:34:18.352848  454580 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1003 19:34:18.352892  454580 kubeadm.go:318] OS: Linux
	I1003 19:34:18.352946  454580 kubeadm.go:318] CGROUPS_CPU: enabled
	I1003 19:34:18.353004  454580 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1003 19:34:18.353067  454580 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1003 19:34:18.353135  454580 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1003 19:34:18.353192  454580 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1003 19:34:18.353254  454580 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1003 19:34:18.353310  454580 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1003 19:34:18.353369  454580 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1003 19:34:18.353423  454580 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1003 19:34:18.353507  454580 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1003 19:34:18.353618  454580 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1003 19:34:18.353727  454580 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1003 19:34:18.353802  454580 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1003 19:34:18.356930  454580 out.go:252]   - Generating certificates and keys ...
	I1003 19:34:18.357029  454580 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1003 19:34:18.357104  454580 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1003 19:34:18.357191  454580 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1003 19:34:18.357263  454580 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1003 19:34:18.357341  454580 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1003 19:34:18.357401  454580 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1003 19:34:18.357471  454580 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1003 19:34:18.357549  454580 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1003 19:34:18.357641  454580 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1003 19:34:18.357724  454580 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1003 19:34:18.357770  454580 kubeadm.go:318] [certs] Using the existing "sa" key
	I1003 19:34:18.357834  454580 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1003 19:34:18.357891  454580 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1003 19:34:18.357954  454580 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1003 19:34:18.358013  454580 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1003 19:34:18.358082  454580 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1003 19:34:18.358143  454580 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1003 19:34:18.358243  454580 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1003 19:34:18.358316  454580 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1003 19:34:18.361254  454580 out.go:252]   - Booting up control plane ...
	I1003 19:34:18.361399  454580 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1003 19:34:18.361527  454580 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1003 19:34:18.361603  454580 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1003 19:34:18.361718  454580 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1003 19:34:18.361821  454580 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1003 19:34:18.361935  454580 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1003 19:34:18.362025  454580 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1003 19:34:18.362068  454580 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1003 19:34:18.362209  454580 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1003 19:34:18.362321  454580 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1003 19:34:18.362392  454580 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 2.000785241s
	I1003 19:34:18.362493  454580 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1003 19:34:18.362582  454580 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1003 19:34:18.362696  454580 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1003 19:34:18.362790  454580 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1003 19:34:18.362880  454580 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000653232s
	I1003 19:34:18.362959  454580 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000759408s
	I1003 19:34:18.363039  454580 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.005894118s
	I1003 19:34:18.363048  454580 kubeadm.go:318] 
	I1003 19:34:18.363150  454580 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1003 19:34:18.363247  454580 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1003 19:34:18.363342  454580 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1003 19:34:18.363443  454580 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1003 19:34:18.363523  454580 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1003 19:34:18.363612  454580 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1003 19:34:18.363621  454580 kubeadm.go:318] 
	I1003 19:34:18.363685  454580 kubeadm.go:402] duration metric: took 8m14.669107964s to StartCluster
	I1003 19:34:18.363724  454580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 19:34:18.363792  454580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 19:34:18.389290  454580 cri.go:89] found id: ""
	I1003 19:34:18.389325  454580 logs.go:282] 0 containers: []
	W1003 19:34:18.389335  454580 logs.go:284] No container was found matching "kube-apiserver"
	I1003 19:34:18.389341  454580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 19:34:18.389399  454580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 19:34:18.415011  454580 cri.go:89] found id: ""
	I1003 19:34:18.415033  454580 logs.go:282] 0 containers: []
	W1003 19:34:18.415041  454580 logs.go:284] No container was found matching "etcd"
	I1003 19:34:18.415047  454580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 19:34:18.415154  454580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 19:34:18.441330  454580 cri.go:89] found id: ""
	I1003 19:34:18.441366  454580 logs.go:282] 0 containers: []
	W1003 19:34:18.441375  454580 logs.go:284] No container was found matching "coredns"
	I1003 19:34:18.441382  454580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 19:34:18.441484  454580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 19:34:18.468767  454580 cri.go:89] found id: ""
	I1003 19:34:18.468794  454580 logs.go:282] 0 containers: []
	W1003 19:34:18.468802  454580 logs.go:284] No container was found matching "kube-scheduler"
	I1003 19:34:18.468809  454580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 19:34:18.468870  454580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 19:34:18.495249  454580 cri.go:89] found id: ""
	I1003 19:34:18.495281  454580 logs.go:282] 0 containers: []
	W1003 19:34:18.495290  454580 logs.go:284] No container was found matching "kube-proxy"
	I1003 19:34:18.495298  454580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 19:34:18.495358  454580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 19:34:18.521388  454580 cri.go:89] found id: ""
	I1003 19:34:18.521420  454580 logs.go:282] 0 containers: []
	W1003 19:34:18.521428  454580 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 19:34:18.521435  454580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 19:34:18.521505  454580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 19:34:18.547437  454580 cri.go:89] found id: ""
	I1003 19:34:18.547479  454580 logs.go:282] 0 containers: []
	W1003 19:34:18.547488  454580 logs.go:284] No container was found matching "kindnet"
	I1003 19:34:18.547498  454580 logs.go:123] Gathering logs for kubelet ...
	I1003 19:34:18.547509  454580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 19:34:18.634059  454580 logs.go:123] Gathering logs for dmesg ...
	I1003 19:34:18.634097  454580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 19:34:18.650724  454580 logs.go:123] Gathering logs for describe nodes ...
	I1003 19:34:18.650751  454580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 19:34:18.721952  454580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 19:34:18.714182    2361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 19:34:18.714727    2361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 19:34:18.715865    2361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 19:34:18.716330    2361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 19:34:18.717796    2361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 19:34:18.714182    2361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 19:34:18.714727    2361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 19:34:18.715865    2361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 19:34:18.716330    2361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 19:34:18.717796    2361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 19:34:18.722018  454580 logs.go:123] Gathering logs for CRI-O ...
	I1003 19:34:18.722047  454580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 19:34:18.796560  454580 logs.go:123] Gathering logs for container status ...
	I1003 19:34:18.796594  454580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1003 19:34:18.829645  454580 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 2.000785241s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000653232s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000759408s
	[control-plane-check] kube-scheduler is not healthy after 4m0.005894118s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.85.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.85.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1003 19:34:18.829696  454580 out.go:285] * 
	* 
	W1003 19:34:18.829776  454580 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 2.000785241s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000653232s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000759408s
	[control-plane-check] kube-scheduler is not healthy after 4m0.005894118s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.85.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.85.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 2.000785241s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000653232s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000759408s
	[control-plane-check] kube-scheduler is not healthy after 4m0.005894118s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.85.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.85.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1003 19:34:18.830053  454580 out.go:285] * 
	* 
	W1003 19:34:18.832321  454580 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 19:34:18.838675  454580 out.go:203] 
	W1003 19:34:18.841589  454580 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 2.000785241s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000653232s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000759408s
	[control-plane-check] kube-scheduler is not healthy after 4m0.005894118s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.85.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.85.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 2.000785241s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000653232s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000759408s
	[control-plane-check] kube-scheduler is not healthy after 4m0.005894118s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.85.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.85.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1003 19:34:18.841618  454580 out.go:285] * 
	* 
	I1003 19:34:18.844712  454580 out.go:203] 

                                                
                                                
** /stderr **
docker_test.go:157: failed to start minikube with args: "out/minikube-linux-arm64 start -p force-systemd-env-159095 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio" : exit status 80
docker_test.go:166: *** TestForceSystemdEnv FAILED at 2025-10-03 19:34:18.910428013 +0000 UTC m=+4060.175505835
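The failure pattern above (all three control-plane health checks timing out after 4m0s while crictl finds no kube-apiserver, kube-controller-manager, or kube-scheduler containers) suggests the static-pod containers never started under CRI-O inside the force-systemd-env-159095 node. A minimal follow-up sketch, assuming the profile container is still running and reachable over SSH (these commands are not part of the captured log), would be to look at CRI-O and the kubelet from inside the node, e.g.:
	- 'minikube ssh -p force-systemd-env-159095 -- sudo systemctl status crio --no-pager'
	- 'minikube ssh -p force-systemd-env-159095 -- sudo crictl ps -a'
	- 'minikube ssh -p force-systemd-env-159095 -- sudo journalctl -u crio -n 200 --no-pager'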
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestForceSystemdEnv]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestForceSystemdEnv]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect force-systemd-env-159095
helpers_test.go:243: (dbg) docker inspect force-systemd-env-159095:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e32ec65ba1a0840323676b16d2bd9fc40fa202a129da33008b255c231fbe1709",
	        "Created": "2025-10-03T19:25:54.236870036Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 454984,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-03T19:25:54.304608652Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/e32ec65ba1a0840323676b16d2bd9fc40fa202a129da33008b255c231fbe1709/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e32ec65ba1a0840323676b16d2bd9fc40fa202a129da33008b255c231fbe1709/hostname",
	        "HostsPath": "/var/lib/docker/containers/e32ec65ba1a0840323676b16d2bd9fc40fa202a129da33008b255c231fbe1709/hosts",
	        "LogPath": "/var/lib/docker/containers/e32ec65ba1a0840323676b16d2bd9fc40fa202a129da33008b255c231fbe1709/e32ec65ba1a0840323676b16d2bd9fc40fa202a129da33008b255c231fbe1709-json.log",
	        "Name": "/force-systemd-env-159095",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "force-systemd-env-159095:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "force-systemd-env-159095",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e32ec65ba1a0840323676b16d2bd9fc40fa202a129da33008b255c231fbe1709",
	                "LowerDir": "/var/lib/docker/overlay2/476312b7ed35e8f88a9cb9288af71892b7be95732ce8e0dfa17d49336575474f-init/diff:/var/lib/docker/overlay2/87b205803817b0b71a214d995ab7e10a92033bbf72d76d6e052f1d21ccecb313/diff",
	                "MergedDir": "/var/lib/docker/overlay2/476312b7ed35e8f88a9cb9288af71892b7be95732ce8e0dfa17d49336575474f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/476312b7ed35e8f88a9cb9288af71892b7be95732ce8e0dfa17d49336575474f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/476312b7ed35e8f88a9cb9288af71892b7be95732ce8e0dfa17d49336575474f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "force-systemd-env-159095",
	                "Source": "/var/lib/docker/volumes/force-systemd-env-159095/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "force-systemd-env-159095",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "force-systemd-env-159095",
	                "name.minikube.sigs.k8s.io": "force-systemd-env-159095",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a7a1d751896518a9fb7789dceb8656e0db3c33d6cc814d5078ceb867538610eb",
	            "SandboxKey": "/var/run/docker/netns/a7a1d7518965",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33403"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33404"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33407"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33405"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33406"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "force-systemd-env-159095": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "62:02:2b:39:ee:73",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e3eb660ee86b4d92ea3b5b2a90b8882f57cab4db21d40b74ae5afb5d38136ca7",
	                    "EndpointID": "62113b9a3f43dd8d31ed19f108026f4cb01f5ff0c3741bcb22e1cdfa58e8e590",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "force-systemd-env-159095",
	                        "e32ec65ba1a0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
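As an aside on the inspect dump above: individual fields can be pulled out with Go templates instead of scanning the full JSON, which is the same mechanism the harness itself uses later in the post-mortem log. A minimal sketch against the force-systemd-env-159095 container (assuming it is still present when run):

	# published host port for the container's SSH endpoint (22/tcp)
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' force-systemd-env-159095
	# restart policy and restart count, useful when a node container keeps dying
	docker container inspect -f '{{.HostConfig.RestartPolicy.Name}} {{.RestartCount}}' force-systemd-env-159095
	# static IP assigned on the per-profile bridge network
	docker container inspect -f '{{(index .NetworkSettings.Networks "force-systemd-env-159095").IPAddress}}' force-systemd-env-159095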
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-env-159095 -n force-systemd-env-159095
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-env-159095 -n force-systemd-env-159095: exit status 6 (307.590519ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1003 19:34:19.239024  461701 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-env-159095" does not appear in /home/jenkins/minikube-integration/21625-284583/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:247: status error: exit status 6 (may be ok)
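The status check fails because the profile's endpoint does not appear in the test's kubeconfig (see the stderr above), and the stdout warning already points at `minikube update-context` as the remedy. Outside the harness, repairing the stale context would normally look along these lines (a minimal sketch, assuming the same binary and KUBECONFIG paths the test uses and that the profile still exists):

	# list contexts known to the kubeconfig the test points at
	KUBECONFIG=/home/jenkins/minikube-integration/21625-284583/kubeconfig kubectl config get-contexts
	# rewrite the context for this profile, as suggested by the warning in stdout
	KUBECONFIG=/home/jenkins/minikube-integration/21625-284583/kubeconfig out/minikube-linux-arm64 -p force-systemd-env-159095 update-context
	# re-check the host status once the context is repaired
	out/minikube-linux-arm64 status --format={{.Host}} -p force-systemd-env-159095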
helpers_test.go:252: <<< TestForceSystemdEnv FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestForceSystemdEnv]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-159095 logs -n 25
helpers_test.go:260: TestForceSystemdEnv logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                    ARGS                                                    │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-388132 sudo cat /etc/kubernetes/kubelet.conf                                                     │ cilium-388132             │ jenkins │ v1.37.0 │ 03 Oct 25 19:25 UTC │                     │
	│ ssh     │ -p cilium-388132 sudo cat /var/lib/kubelet/config.yaml                                                     │ cilium-388132             │ jenkins │ v1.37.0 │ 03 Oct 25 19:25 UTC │                     │
	│ ssh     │ -p cilium-388132 sudo systemctl status docker --all --full --no-pager                                      │ cilium-388132             │ jenkins │ v1.37.0 │ 03 Oct 25 19:25 UTC │                     │
	│ ssh     │ -p cilium-388132 sudo systemctl cat docker --no-pager                                                      │ cilium-388132             │ jenkins │ v1.37.0 │ 03 Oct 25 19:25 UTC │                     │
	│ ssh     │ -p cilium-388132 sudo cat /etc/docker/daemon.json                                                          │ cilium-388132             │ jenkins │ v1.37.0 │ 03 Oct 25 19:25 UTC │                     │
	│ ssh     │ -p cilium-388132 sudo docker system info                                                                   │ cilium-388132             │ jenkins │ v1.37.0 │ 03 Oct 25 19:25 UTC │                     │
	│ ssh     │ -p cilium-388132 sudo systemctl status cri-docker --all --full --no-pager                                  │ cilium-388132             │ jenkins │ v1.37.0 │ 03 Oct 25 19:25 UTC │                     │
	│ ssh     │ -p cilium-388132 sudo systemctl cat cri-docker --no-pager                                                  │ cilium-388132             │ jenkins │ v1.37.0 │ 03 Oct 25 19:25 UTC │                     │
	│ ssh     │ -p cilium-388132 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                             │ cilium-388132             │ jenkins │ v1.37.0 │ 03 Oct 25 19:25 UTC │                     │
	│ ssh     │ -p cilium-388132 sudo cat /usr/lib/systemd/system/cri-docker.service                                       │ cilium-388132             │ jenkins │ v1.37.0 │ 03 Oct 25 19:25 UTC │                     │
	│ ssh     │ -p cilium-388132 sudo cri-dockerd --version                                                                │ cilium-388132             │ jenkins │ v1.37.0 │ 03 Oct 25 19:25 UTC │                     │
	│ ssh     │ -p cilium-388132 sudo systemctl status containerd --all --full --no-pager                                  │ cilium-388132             │ jenkins │ v1.37.0 │ 03 Oct 25 19:25 UTC │                     │
	│ ssh     │ -p cilium-388132 sudo systemctl cat containerd --no-pager                                                  │ cilium-388132             │ jenkins │ v1.37.0 │ 03 Oct 25 19:25 UTC │                     │
	│ ssh     │ -p cilium-388132 sudo cat /lib/systemd/system/containerd.service                                           │ cilium-388132             │ jenkins │ v1.37.0 │ 03 Oct 25 19:25 UTC │                     │
	│ ssh     │ -p cilium-388132 sudo cat /etc/containerd/config.toml                                                      │ cilium-388132             │ jenkins │ v1.37.0 │ 03 Oct 25 19:25 UTC │                     │
	│ ssh     │ -p cilium-388132 sudo containerd config dump                                                               │ cilium-388132             │ jenkins │ v1.37.0 │ 03 Oct 25 19:25 UTC │                     │
	│ ssh     │ -p cilium-388132 sudo systemctl status crio --all --full --no-pager                                        │ cilium-388132             │ jenkins │ v1.37.0 │ 03 Oct 25 19:25 UTC │                     │
	│ ssh     │ -p cilium-388132 sudo systemctl cat crio --no-pager                                                        │ cilium-388132             │ jenkins │ v1.37.0 │ 03 Oct 25 19:25 UTC │                     │
	│ ssh     │ -p cilium-388132 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                              │ cilium-388132             │ jenkins │ v1.37.0 │ 03 Oct 25 19:25 UTC │                     │
	│ ssh     │ -p cilium-388132 sudo crio config                                                                          │ cilium-388132             │ jenkins │ v1.37.0 │ 03 Oct 25 19:25 UTC │                     │
	│ delete  │ -p cilium-388132                                                                                           │ cilium-388132             │ jenkins │ v1.37.0 │ 03 Oct 25 19:25 UTC │ 03 Oct 25 19:25 UTC │
	│ start   │ -p force-systemd-env-159095 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-env-159095  │ jenkins │ v1.37.0 │ 03 Oct 25 19:25 UTC │                     │
	│ ssh     │ force-systemd-flag-855981 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                       │ force-systemd-flag-855981 │ jenkins │ v1.37.0 │ 03 Oct 25 19:32 UTC │ 03 Oct 25 19:32 UTC │
	│ delete  │ -p force-systemd-flag-855981                                                                               │ force-systemd-flag-855981 │ jenkins │ v1.37.0 │ 03 Oct 25 19:32 UTC │ 03 Oct 25 19:32 UTC │
	│ start   │ -p cert-expiration-324520 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio     │ cert-expiration-324520    │ jenkins │ v1.37.0 │ 03 Oct 25 19:32 UTC │ 03 Oct 25 19:33 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/03 19:32:31
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 19:32:31.784299  458984 out.go:360] Setting OutFile to fd 1 ...
	I1003 19:32:31.784408  458984 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 19:32:31.784412  458984 out.go:374] Setting ErrFile to fd 2...
	I1003 19:32:31.784417  458984 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 19:32:31.784682  458984 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-284583/.minikube/bin
	I1003 19:32:31.785164  458984 out.go:368] Setting JSON to false
	I1003 19:32:31.786099  458984 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8103,"bootTime":1759511849,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1003 19:32:31.786158  458984 start.go:140] virtualization:  
	I1003 19:32:31.790030  458984 out.go:179] * [cert-expiration-324520] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1003 19:32:31.794809  458984 out.go:179]   - MINIKUBE_LOCATION=21625
	I1003 19:32:31.794875  458984 notify.go:220] Checking for updates...
	I1003 19:32:31.801682  458984 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 19:32:31.805023  458984 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21625-284583/kubeconfig
	I1003 19:32:31.808440  458984 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-284583/.minikube
	I1003 19:32:31.811679  458984 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1003 19:32:31.814889  458984 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 19:32:31.818582  458984 config.go:182] Loaded profile config "force-systemd-env-159095": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 19:32:31.818683  458984 driver.go:421] Setting default libvirt URI to qemu:///system
	I1003 19:32:31.853461  458984 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1003 19:32:31.853574  458984 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 19:32:31.911587  458984 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-03 19:32:31.901842052 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1003 19:32:31.911691  458984 docker.go:318] overlay module found
	I1003 19:32:31.915071  458984 out.go:179] * Using the docker driver based on user configuration
	I1003 19:32:31.918132  458984 start.go:304] selected driver: docker
	I1003 19:32:31.918142  458984 start.go:924] validating driver "docker" against <nil>
	I1003 19:32:31.918155  458984 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 19:32:31.918903  458984 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 19:32:31.973371  458984 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-03 19:32:31.963735179 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1003 19:32:31.973515  458984 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1003 19:32:31.973732  458984 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1003 19:32:31.976731  458984 out.go:179] * Using Docker driver with root privileges
	I1003 19:32:31.979830  458984 cni.go:84] Creating CNI manager for ""
	I1003 19:32:31.979894  458984 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 19:32:31.979904  458984 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1003 19:32:31.979996  458984 start.go:348] cluster config:
	{Name:cert-expiration-324520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-324520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loca
l ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 19:32:31.983187  458984 out.go:179] * Starting "cert-expiration-324520" primary control-plane node in "cert-expiration-324520" cluster
	I1003 19:32:31.986050  458984 cache.go:123] Beginning downloading kic base image for docker with crio
	I1003 19:32:31.989075  458984 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1003 19:32:31.991926  458984 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 19:32:31.991988  458984 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21625-284583/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1003 19:32:31.991996  458984 cache.go:58] Caching tarball of preloaded images
	I1003 19:32:31.992005  458984 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1003 19:32:31.992079  458984 preload.go:233] Found /home/jenkins/minikube-integration/21625-284583/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1003 19:32:31.992087  458984 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1003 19:32:31.992193  458984 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/cert-expiration-324520/config.json ...
	I1003 19:32:31.992219  458984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/cert-expiration-324520/config.json: {Name:mk467e70ebb2a6a9d5233cd5630ac80acdf946da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:32:32.014636  458984 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1003 19:32:32.014647  458984 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1003 19:32:32.014660  458984 cache.go:232] Successfully downloaded all kic artifacts
	I1003 19:32:32.014682  458984 start.go:360] acquireMachinesLock for cert-expiration-324520: {Name:mk1f92fbf251ffec500cd5a1ccf89df97f79ff34 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 19:32:32.014786  458984 start.go:364] duration metric: took 90.266µs to acquireMachinesLock for "cert-expiration-324520"
	I1003 19:32:32.014809  458984 start.go:93] Provisioning new machine with config: &{Name:cert-expiration-324520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-324520 Namespace:default APIServerHAVIP:
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock:
SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1003 19:32:32.014876  458984 start.go:125] createHost starting for "" (driver="docker")
	I1003 19:32:32.018324  458984 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1003 19:32:32.018565  458984 start.go:159] libmachine.API.Create for "cert-expiration-324520" (driver="docker")
	I1003 19:32:32.018610  458984 client.go:168] LocalClient.Create starting
	I1003 19:32:32.018687  458984 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem
	I1003 19:32:32.018723  458984 main.go:141] libmachine: Decoding PEM data...
	I1003 19:32:32.018738  458984 main.go:141] libmachine: Parsing certificate...
	I1003 19:32:32.018790  458984 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem
	I1003 19:32:32.018807  458984 main.go:141] libmachine: Decoding PEM data...
	I1003 19:32:32.018815  458984 main.go:141] libmachine: Parsing certificate...
	I1003 19:32:32.019194  458984 cli_runner.go:164] Run: docker network inspect cert-expiration-324520 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1003 19:32:32.036154  458984 cli_runner.go:211] docker network inspect cert-expiration-324520 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1003 19:32:32.036240  458984 network_create.go:284] running [docker network inspect cert-expiration-324520] to gather additional debugging logs...
	I1003 19:32:32.036286  458984 cli_runner.go:164] Run: docker network inspect cert-expiration-324520
	W1003 19:32:32.053106  458984 cli_runner.go:211] docker network inspect cert-expiration-324520 returned with exit code 1
	I1003 19:32:32.053138  458984 network_create.go:287] error running [docker network inspect cert-expiration-324520]: docker network inspect cert-expiration-324520: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network cert-expiration-324520 not found
	I1003 19:32:32.053151  458984 network_create.go:289] output of [docker network inspect cert-expiration-324520]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network cert-expiration-324520 not found
	
	** /stderr **
	I1003 19:32:32.053292  458984 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 19:32:32.069738  458984 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-3a8a28910ba8 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6e:7a:d0:f8:54:63} reservation:<nil>}
	I1003 19:32:32.070134  458984 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-157403cbb468 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:8a:ee:cb:12:bf:d0} reservation:<nil>}
	I1003 19:32:32.070347  458984 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8d1e24f7a986 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:9e:1b:b1:d8:1a:13} reservation:<nil>}
	I1003 19:32:32.070775  458984 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a093d0}
	I1003 19:32:32.070790  458984 network_create.go:124] attempt to create docker network cert-expiration-324520 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1003 19:32:32.070848  458984 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=cert-expiration-324520 cert-expiration-324520
	I1003 19:32:32.135796  458984 network_create.go:108] docker network cert-expiration-324520 192.168.76.0/24 created
	I1003 19:32:32.135815  458984 kic.go:121] calculated static IP "192.168.76.2" for the "cert-expiration-324520" container
	I1003 19:32:32.135890  458984 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1003 19:32:32.154112  458984 cli_runner.go:164] Run: docker volume create cert-expiration-324520 --label name.minikube.sigs.k8s.io=cert-expiration-324520 --label created_by.minikube.sigs.k8s.io=true
	I1003 19:32:32.172099  458984 oci.go:103] Successfully created a docker volume cert-expiration-324520
	I1003 19:32:32.172174  458984 cli_runner.go:164] Run: docker run --rm --name cert-expiration-324520-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-expiration-324520 --entrypoint /usr/bin/test -v cert-expiration-324520:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1003 19:32:32.678595  458984 oci.go:107] Successfully prepared a docker volume cert-expiration-324520
	I1003 19:32:32.678642  458984 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 19:32:32.678660  458984 kic.go:194] Starting extracting preloaded images to volume ...
	I1003 19:32:32.678727  458984 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21625-284583/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v cert-expiration-324520:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1003 19:32:37.137544  458984 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21625-284583/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v cert-expiration-324520:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.458781228s)
	I1003 19:32:37.137564  458984 kic.go:203] duration metric: took 4.458901395s to extract preloaded images to volume ...
	W1003 19:32:37.137712  458984 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1003 19:32:37.137827  458984 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1003 19:32:37.192306  458984 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname cert-expiration-324520 --name cert-expiration-324520 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=cert-expiration-324520 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=cert-expiration-324520 --network cert-expiration-324520 --ip 192.168.76.2 --volume cert-expiration-324520:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1003 19:32:37.480016  458984 cli_runner.go:164] Run: docker container inspect cert-expiration-324520 --format={{.State.Running}}
	I1003 19:32:37.505650  458984 cli_runner.go:164] Run: docker container inspect cert-expiration-324520 --format={{.State.Status}}
	I1003 19:32:37.531679  458984 cli_runner.go:164] Run: docker exec cert-expiration-324520 stat /var/lib/dpkg/alternatives/iptables
	I1003 19:32:37.585921  458984 oci.go:144] the created container "cert-expiration-324520" has a running status.
	I1003 19:32:37.585954  458984 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21625-284583/.minikube/machines/cert-expiration-324520/id_rsa...
	I1003 19:32:37.971722  458984 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21625-284583/.minikube/machines/cert-expiration-324520/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1003 19:32:37.994051  458984 cli_runner.go:164] Run: docker container inspect cert-expiration-324520 --format={{.State.Status}}
	I1003 19:32:38.037774  458984 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1003 19:32:38.037785  458984 kic_runner.go:114] Args: [docker exec --privileged cert-expiration-324520 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1003 19:32:38.103863  458984 cli_runner.go:164] Run: docker container inspect cert-expiration-324520 --format={{.State.Status}}
	I1003 19:32:38.137111  458984 machine.go:93] provisionDockerMachine start ...
	I1003 19:32:38.137205  458984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-324520
	I1003 19:32:38.162848  458984 main.go:141] libmachine: Using SSH client type: native
	I1003 19:32:38.163170  458984 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33408 <nil> <nil>}
	I1003 19:32:38.163178  458984 main.go:141] libmachine: About to run SSH command:
	hostname
	I1003 19:32:38.163838  458984 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1003 19:32:41.296265  458984 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-324520
	
	I1003 19:32:41.296280  458984 ubuntu.go:182] provisioning hostname "cert-expiration-324520"
	I1003 19:32:41.296353  458984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-324520
	I1003 19:32:41.313513  458984 main.go:141] libmachine: Using SSH client type: native
	I1003 19:32:41.313807  458984 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33408 <nil> <nil>}
	I1003 19:32:41.313815  458984 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-expiration-324520 && echo "cert-expiration-324520" | sudo tee /etc/hostname
	I1003 19:32:41.458009  458984 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-324520
	
	I1003 19:32:41.458093  458984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-324520
	I1003 19:32:41.475102  458984 main.go:141] libmachine: Using SSH client type: native
	I1003 19:32:41.475401  458984 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33408 <nil> <nil>}
	I1003 19:32:41.475416  458984 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-324520' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-324520/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-324520' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 19:32:41.604848  458984 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 19:32:41.604862  458984 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21625-284583/.minikube CaCertPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21625-284583/.minikube}
	I1003 19:32:41.604889  458984 ubuntu.go:190] setting up certificates
	I1003 19:32:41.604898  458984 provision.go:84] configureAuth start
	I1003 19:32:41.604959  458984 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-324520
	I1003 19:32:41.621108  458984 provision.go:143] copyHostCerts
	I1003 19:32:41.621177  458984 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-284583/.minikube/ca.pem, removing ...
	I1003 19:32:41.621185  458984 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-284583/.minikube/ca.pem
	I1003 19:32:41.621266  458984 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21625-284583/.minikube/ca.pem (1082 bytes)
	I1003 19:32:41.621369  458984 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-284583/.minikube/cert.pem, removing ...
	I1003 19:32:41.621372  458984 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-284583/.minikube/cert.pem
	I1003 19:32:41.621398  458984 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21625-284583/.minikube/cert.pem (1123 bytes)
	I1003 19:32:41.621460  458984 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-284583/.minikube/key.pem, removing ...
	I1003 19:32:41.621463  458984 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-284583/.minikube/key.pem
	I1003 19:32:41.621486  458984 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21625-284583/.minikube/key.pem (1675 bytes)
	I1003 19:32:41.621540  458984 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-324520 san=[127.0.0.1 192.168.76.2 cert-expiration-324520 localhost minikube]
	I1003 19:32:41.825618  458984 provision.go:177] copyRemoteCerts
	I1003 19:32:41.825678  458984 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 19:32:41.825717  458984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-324520
	I1003 19:32:41.848502  458984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/cert-expiration-324520/id_rsa Username:docker}
	I1003 19:32:41.944521  458984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 19:32:41.962723  458984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1003 19:32:41.979725  458984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1003 19:32:41.997347  458984 provision.go:87] duration metric: took 392.421378ms to configureAuth
	I1003 19:32:41.997366  458984 ubuntu.go:206] setting minikube options for container-runtime
	I1003 19:32:41.997544  458984 config.go:182] Loaded profile config "cert-expiration-324520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 19:32:41.997657  458984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-324520
	I1003 19:32:42.016300  458984 main.go:141] libmachine: Using SSH client type: native
	I1003 19:32:42.016709  458984 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33408 <nil> <nil>}
	I1003 19:32:42.016763  458984 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1003 19:32:42.279535  458984 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1003 19:32:42.279549  458984 machine.go:96] duration metric: took 4.142426158s to provisionDockerMachine
	I1003 19:32:42.279557  458984 client.go:171] duration metric: took 10.260941624s to LocalClient.Create
	I1003 19:32:42.279568  458984 start.go:167] duration metric: took 10.261004944s to libmachine.API.Create "cert-expiration-324520"
	I1003 19:32:42.279574  458984 start.go:293] postStartSetup for "cert-expiration-324520" (driver="docker")
	I1003 19:32:42.279584  458984 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 19:32:42.279676  458984 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 19:32:42.279718  458984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-324520
	I1003 19:32:42.298385  458984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/cert-expiration-324520/id_rsa Username:docker}
	I1003 19:32:42.396625  458984 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 19:32:42.399638  458984 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1003 19:32:42.399655  458984 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1003 19:32:42.399664  458984 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-284583/.minikube/addons for local assets ...
	I1003 19:32:42.399721  458984 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-284583/.minikube/files for local assets ...
	I1003 19:32:42.399799  458984 filesync.go:149] local asset: /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem -> 2864342.pem in /etc/ssl/certs
	I1003 19:32:42.399898  458984 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 19:32:42.406970  458984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem --> /etc/ssl/certs/2864342.pem (1708 bytes)
	I1003 19:32:42.423943  458984 start.go:296] duration metric: took 144.355384ms for postStartSetup
	I1003 19:32:42.424355  458984 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-324520
	I1003 19:32:42.440804  458984 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/cert-expiration-324520/config.json ...
	I1003 19:32:42.441072  458984 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 19:32:42.441113  458984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-324520
	I1003 19:32:42.457672  458984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/cert-expiration-324520/id_rsa Username:docker}
	I1003 19:32:42.549780  458984 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1003 19:32:42.555025  458984 start.go:128] duration metric: took 10.540135626s to createHost
	I1003 19:32:42.555039  458984 start.go:83] releasing machines lock for "cert-expiration-324520", held for 10.540246356s
	I1003 19:32:42.555120  458984 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-324520
	I1003 19:32:42.571619  458984 ssh_runner.go:195] Run: cat /version.json
	I1003 19:32:42.571663  458984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-324520
	I1003 19:32:42.571900  458984 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 19:32:42.571959  458984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-324520
	I1003 19:32:42.594174  458984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/cert-expiration-324520/id_rsa Username:docker}
	I1003 19:32:42.604581  458984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/cert-expiration-324520/id_rsa Username:docker}
	I1003 19:32:42.688271  458984 ssh_runner.go:195] Run: systemctl --version
	I1003 19:32:42.779395  458984 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1003 19:32:42.815002  458984 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1003 19:32:42.819427  458984 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 19:32:42.819492  458984 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 19:32:42.848339  458984 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1003 19:32:42.848351  458984 start.go:495] detecting cgroup driver to use...
	I1003 19:32:42.848383  458984 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1003 19:32:42.848433  458984 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 19:32:42.864841  458984 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 19:32:42.877957  458984 docker.go:218] disabling cri-docker service (if available) ...
	I1003 19:32:42.878009  458984 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1003 19:32:42.894905  458984 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1003 19:32:42.913626  458984 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1003 19:32:43.031739  458984 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1003 19:32:43.176894  458984 docker.go:234] disabling docker service ...
	I1003 19:32:43.176948  458984 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1003 19:32:43.201378  458984 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1003 19:32:43.216091  458984 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1003 19:32:43.327590  458984 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1003 19:32:43.441066  458984 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1003 19:32:43.454071  458984 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 19:32:43.468414  458984 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1003 19:32:43.468468  458984 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:32:43.477261  458984 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1003 19:32:43.477339  458984 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:32:43.486967  458984 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:32:43.495562  458984 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:32:43.504477  458984 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 19:32:43.513246  458984 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:32:43.522106  458984 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:32:43.535730  458984 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:32:43.544693  458984 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 19:32:43.552126  458984 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 19:32:43.560513  458984 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 19:32:43.684201  458984 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1003 19:32:43.824067  458984 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1003 19:32:43.824137  458984 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1003 19:32:43.828267  458984 start.go:563] Will wait 60s for crictl version
	I1003 19:32:43.828321  458984 ssh_runner.go:195] Run: which crictl
	I1003 19:32:43.832093  458984 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1003 19:32:43.859984  458984 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1003 19:32:43.860064  458984 ssh_runner.go:195] Run: crio --version
	I1003 19:32:43.888895  458984 ssh_runner.go:195] Run: crio --version
	I1003 19:32:43.919757  458984 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1003 19:32:43.922581  458984 cli_runner.go:164] Run: docker network inspect cert-expiration-324520 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 19:32:43.939837  458984 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1003 19:32:43.943771  458984 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 19:32:43.953338  458984 kubeadm.go:883] updating cluster {Name:cert-expiration-324520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-324520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1003 19:32:43.953444  458984 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 19:32:43.953497  458984 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 19:32:43.990665  458984 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 19:32:43.990676  458984 crio.go:433] Images already preloaded, skipping extraction
	I1003 19:32:43.990732  458984 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 19:32:44.028941  458984 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 19:32:44.028954  458984 cache_images.go:85] Images are preloaded, skipping loading
	I1003 19:32:44.028961  458984 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1003 19:32:44.029082  458984 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=cert-expiration-324520 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-324520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1003 19:32:44.029165  458984 ssh_runner.go:195] Run: crio config
	I1003 19:32:44.080380  458984 cni.go:84] Creating CNI manager for ""
	I1003 19:32:44.080391  458984 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 19:32:44.080408  458984 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1003 19:32:44.080429  458984 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-324520 NodeName:cert-expiration-324520 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 19:32:44.080575  458984 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "cert-expiration-324520"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1003 19:32:44.080691  458984 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1003 19:32:44.088927  458984 binaries.go:44] Found k8s binaries, skipping transfer
	I1003 19:32:44.088988  458984 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1003 19:32:44.097344  458984 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1003 19:32:44.110599  458984 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 19:32:44.123826  458984 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
	I1003 19:32:44.136482  458984 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1003 19:32:44.139984  458984 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 19:32:44.149362  458984 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 19:32:44.255486  458984 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 19:32:44.270735  458984 certs.go:69] Setting up /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/cert-expiration-324520 for IP: 192.168.76.2
	I1003 19:32:44.270746  458984 certs.go:195] generating shared ca certs ...
	I1003 19:32:44.270760  458984 certs.go:227] acquiring lock for ca certs: {Name:mk5a10e6c921326e9c211447576eaeb893259ba7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:32:44.270926  458984 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21625-284583/.minikube/ca.key
	I1003 19:32:44.270977  458984 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.key
	I1003 19:32:44.270983  458984 certs.go:257] generating profile certs ...
	I1003 19:32:44.271049  458984 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/cert-expiration-324520/client.key
	I1003 19:32:44.271067  458984 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/cert-expiration-324520/client.crt with IP's: []
	I1003 19:32:45.051143  458984 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/cert-expiration-324520/client.crt ...
	I1003 19:32:45.051167  458984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/cert-expiration-324520/client.crt: {Name:mk2b9b4a6c3ea836978cddbd883877a629c23ee1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:32:45.051422  458984 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/cert-expiration-324520/client.key ...
	I1003 19:32:45.051432  458984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/cert-expiration-324520/client.key: {Name:mkc9a91b3db18452b751f96097ad203733867b5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:32:45.051522  458984 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/cert-expiration-324520/apiserver.key.8ab1f55d
	I1003 19:32:45.051538  458984 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/cert-expiration-324520/apiserver.crt.8ab1f55d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1003 19:32:45.484306  458984 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/cert-expiration-324520/apiserver.crt.8ab1f55d ...
	I1003 19:32:45.484323  458984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/cert-expiration-324520/apiserver.crt.8ab1f55d: {Name:mkc5fc44830972487e302e8cc2655e917911cb52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:32:45.484525  458984 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/cert-expiration-324520/apiserver.key.8ab1f55d ...
	I1003 19:32:45.484534  458984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/cert-expiration-324520/apiserver.key.8ab1f55d: {Name:mkafd1e81ec471856ea23c9448fc66093bb0dec6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:32:45.484617  458984 certs.go:382] copying /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/cert-expiration-324520/apiserver.crt.8ab1f55d -> /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/cert-expiration-324520/apiserver.crt
	I1003 19:32:45.484693  458984 certs.go:386] copying /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/cert-expiration-324520/apiserver.key.8ab1f55d -> /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/cert-expiration-324520/apiserver.key
	I1003 19:32:45.484776  458984 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/cert-expiration-324520/proxy-client.key
	I1003 19:32:45.484788  458984 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/cert-expiration-324520/proxy-client.crt with IP's: []
	I1003 19:32:45.593962  458984 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/cert-expiration-324520/proxy-client.crt ...
	I1003 19:32:45.593976  458984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/cert-expiration-324520/proxy-client.crt: {Name:mk5a1747fe0bee7c76f703c3ee28022e2b30c832 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:32:45.594154  458984 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/cert-expiration-324520/proxy-client.key ...
	I1003 19:32:45.594161  458984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/cert-expiration-324520/proxy-client.key: {Name:mk644d8ba8416cc626999df111afde470c7b9f2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:32:45.594353  458984 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/286434.pem (1338 bytes)
	W1003 19:32:45.594389  458984 certs.go:480] ignoring /home/jenkins/minikube-integration/21625-284583/.minikube/certs/286434_empty.pem, impossibly tiny 0 bytes
	I1003 19:32:45.594397  458984 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 19:32:45.594422  458984 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem (1082 bytes)
	I1003 19:32:45.594445  458984 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem (1123 bytes)
	I1003 19:32:45.594468  458984 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/key.pem (1675 bytes)
	I1003 19:32:45.594507  458984 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem (1708 bytes)
	I1003 19:32:45.595115  458984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 19:32:45.613949  458984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1003 19:32:45.631939  458984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 19:32:45.650091  458984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1003 19:32:45.668055  458984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/cert-expiration-324520/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1003 19:32:45.685829  458984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/cert-expiration-324520/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1003 19:32:45.703550  458984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/cert-expiration-324520/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 19:32:45.721105  458984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/cert-expiration-324520/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1003 19:32:45.738401  458984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem --> /usr/share/ca-certificates/2864342.pem (1708 bytes)
	I1003 19:32:45.756332  458984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 19:32:45.774015  458984 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/certs/286434.pem --> /usr/share/ca-certificates/286434.pem (1338 bytes)
	I1003 19:32:45.792191  458984 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1003 19:32:45.805531  458984 ssh_runner.go:195] Run: openssl version
	I1003 19:32:45.811787  458984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2864342.pem && ln -fs /usr/share/ca-certificates/2864342.pem /etc/ssl/certs/2864342.pem"
	I1003 19:32:45.820466  458984 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2864342.pem
	I1003 19:32:45.824036  458984 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  3 18:34 /usr/share/ca-certificates/2864342.pem
	I1003 19:32:45.824096  458984 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2864342.pem
	I1003 19:32:45.866270  458984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2864342.pem /etc/ssl/certs/3ec20f2e.0"
	I1003 19:32:45.874534  458984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 19:32:45.882838  458984 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 19:32:45.886787  458984 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  3 18:27 /usr/share/ca-certificates/minikubeCA.pem
	I1003 19:32:45.886839  458984 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 19:32:45.928115  458984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1003 19:32:45.936548  458984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/286434.pem && ln -fs /usr/share/ca-certificates/286434.pem /etc/ssl/certs/286434.pem"
	I1003 19:32:45.945111  458984 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/286434.pem
	I1003 19:32:45.949038  458984 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  3 18:34 /usr/share/ca-certificates/286434.pem
	I1003 19:32:45.949092  458984 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/286434.pem
	I1003 19:32:45.990583  458984 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/286434.pem /etc/ssl/certs/51391683.0"
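The openssl/ln steps above follow the standard OpenSSL trust-directory convention: each CA certificate is hashed with openssl x509 -hash and linked into /etc/ssl/certs under the name <hash>.0 so TLS clients can locate it by subject hash. A minimal shell sketch of that same pattern, reusing a path and hash already shown in this log (illustrative only, not part of the test run):

	cert=/usr/share/ca-certificates/minikubeCA.pem     # CA file copied onto the node earlier in this log
	hash=$(openssl x509 -hash -noout -in "$cert")      # prints the subject hash, b5213941 for this CA
	sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"     # hash-named symlink that OpenSSL lookups expect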
	I1003 19:32:45.999060  458984 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 19:32:46.002497  458984 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1003 19:32:46.002544  458984 kubeadm.go:400] StartCluster: {Name:cert-expiration-324520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-324520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 19:32:46.002609  458984 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1003 19:32:46.002671  458984 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1003 19:32:46.034265  458984 cri.go:89] found id: ""
	I1003 19:32:46.034345  458984 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 19:32:46.042498  458984 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1003 19:32:46.050897  458984 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1003 19:32:46.050956  458984 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 19:32:46.059459  458984 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1003 19:32:46.059469  458984 kubeadm.go:157] found existing configuration files:
	
	I1003 19:32:46.059529  458984 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1003 19:32:46.067713  458984 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1003 19:32:46.067789  458984 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1003 19:32:46.075615  458984 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1003 19:32:46.083612  458984 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1003 19:32:46.083681  458984 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1003 19:32:46.091330  458984 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1003 19:32:46.099228  458984 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1003 19:32:46.099287  458984 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1003 19:32:46.106819  458984 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1003 19:32:46.114839  458984 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1003 19:32:46.114904  458984 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1003 19:32:46.122474  458984 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1003 19:32:46.164320  458984 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1003 19:32:46.164372  458984 kubeadm.go:318] [preflight] Running pre-flight checks
	I1003 19:32:46.191349  458984 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1003 19:32:46.191415  458984 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1003 19:32:46.191449  458984 kubeadm.go:318] OS: Linux
	I1003 19:32:46.191496  458984 kubeadm.go:318] CGROUPS_CPU: enabled
	I1003 19:32:46.191545  458984 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1003 19:32:46.191593  458984 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1003 19:32:46.191642  458984 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1003 19:32:46.191690  458984 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1003 19:32:46.191740  458984 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1003 19:32:46.191786  458984 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1003 19:32:46.191835  458984 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1003 19:32:46.191882  458984 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1003 19:32:46.261204  458984 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1003 19:32:46.261341  458984 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1003 19:32:46.261437  458984 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1003 19:32:46.269384  458984 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1003 19:32:46.276204  458984 out.go:252]   - Generating certificates and keys ...
	I1003 19:32:46.276317  458984 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1003 19:32:46.276395  458984 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1003 19:32:46.427797  458984 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1003 19:32:46.927602  458984 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1003 19:32:47.529946  458984 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1003 19:32:47.687968  458984 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1003 19:32:48.631788  458984 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1003 19:32:48.632111  458984 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [cert-expiration-324520 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1003 19:32:49.157777  458984 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1003 19:32:49.158056  458984 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [cert-expiration-324520 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1003 19:32:49.659305  458984 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1003 19:32:50.411674  458984 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1003 19:32:50.680979  458984 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1003 19:32:50.681228  458984 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1003 19:32:51.258203  458984 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1003 19:32:51.522656  458984 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1003 19:32:52.115670  458984 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1003 19:32:52.443319  458984 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1003 19:32:52.875291  458984 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1003 19:32:52.875931  458984 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1003 19:32:52.878537  458984 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1003 19:32:52.882700  458984 out.go:252]   - Booting up control plane ...
	I1003 19:32:52.882801  458984 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1003 19:32:52.882880  458984 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1003 19:32:52.882948  458984 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1003 19:32:52.897914  458984 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1003 19:32:52.898188  458984 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1003 19:32:52.906582  458984 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1003 19:32:52.906677  458984 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1003 19:32:52.906718  458984 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1003 19:32:53.049641  458984 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1003 19:32:53.049759  458984 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1003 19:32:54.050275  458984 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.00090837s
	I1003 19:32:54.054194  458984 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1003 19:32:54.054287  458984 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1003 19:32:54.054617  458984 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1003 19:32:54.054701  458984 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1003 19:32:58.835156  458984 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.780937138s
	I1003 19:32:59.077210  458984 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 5.022637297s
	I1003 19:33:00.061826  458984 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.002031059s
	I1003 19:33:00.128683  458984 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1003 19:33:00.193149  458984 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1003 19:33:00.277482  458984 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1003 19:33:00.277986  458984 kubeadm.go:318] [mark-control-plane] Marking the node cert-expiration-324520 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1003 19:33:00.331589  458984 kubeadm.go:318] [bootstrap-token] Using token: wsdn3t.imc3irb8d74wlxum
	I1003 19:33:00.334588  458984 out.go:252]   - Configuring RBAC rules ...
	I1003 19:33:00.334719  458984 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1003 19:33:00.351669  458984 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1003 19:33:00.380751  458984 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1003 19:33:00.390035  458984 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1003 19:33:00.399598  458984 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1003 19:33:00.407225  458984 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1003 19:33:00.483636  458984 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1003 19:33:00.921571  458984 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1003 19:33:01.477436  458984 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1003 19:33:01.478542  458984 kubeadm.go:318] 
	I1003 19:33:01.478611  458984 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1003 19:33:01.478616  458984 kubeadm.go:318] 
	I1003 19:33:01.478695  458984 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1003 19:33:01.478711  458984 kubeadm.go:318] 
	I1003 19:33:01.478737  458984 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1003 19:33:01.478797  458984 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1003 19:33:01.478848  458984 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1003 19:33:01.478851  458984 kubeadm.go:318] 
	I1003 19:33:01.478906  458984 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1003 19:33:01.478909  458984 kubeadm.go:318] 
	I1003 19:33:01.478957  458984 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1003 19:33:01.478961  458984 kubeadm.go:318] 
	I1003 19:33:01.479014  458984 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1003 19:33:01.479091  458984 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1003 19:33:01.479161  458984 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1003 19:33:01.479164  458984 kubeadm.go:318] 
	I1003 19:33:01.479250  458984 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1003 19:33:01.479330  458984 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1003 19:33:01.479333  458984 kubeadm.go:318] 
	I1003 19:33:01.479419  458984 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token wsdn3t.imc3irb8d74wlxum \
	I1003 19:33:01.479528  458984 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:f66ff31263aa4cda6b17caa2076838d6a1918275f1c2773b90b119c0d4a4d71a \
	I1003 19:33:01.479549  458984 kubeadm.go:318] 	--control-plane 
	I1003 19:33:01.479553  458984 kubeadm.go:318] 
	I1003 19:33:01.479648  458984 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1003 19:33:01.479651  458984 kubeadm.go:318] 
	I1003 19:33:01.479736  458984 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token wsdn3t.imc3irb8d74wlxum \
	I1003 19:33:01.479840  458984 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:f66ff31263aa4cda6b17caa2076838d6a1918275f1c2773b90b119c0d4a4d71a 
	I1003 19:33:01.484037  458984 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1003 19:33:01.484265  458984 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1003 19:33:01.484382  458984 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1003 19:33:01.484397  458984 cni.go:84] Creating CNI manager for ""
	I1003 19:33:01.484404  458984 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 19:33:01.489460  458984 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1003 19:33:01.492401  458984 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1003 19:33:01.496561  458984 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1003 19:33:01.496571  458984 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1003 19:33:01.512407  458984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1003 19:33:01.804643  458984 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1003 19:33:01.804713  458984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 19:33:01.804805  458984 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes cert-expiration-324520 minikube.k8s.io/updated_at=2025_10_03T19_33_01_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=a43873c79fc22f8b1ccd29d3dfa635d392b09335 minikube.k8s.io/name=cert-expiration-324520 minikube.k8s.io/primary=true
	I1003 19:33:01.988057  458984 kubeadm.go:1113] duration metric: took 183.424958ms to wait for elevateKubeSystemPrivileges
	I1003 19:33:01.988028  458984 ops.go:34] apiserver oom_adj: -16
	I1003 19:33:01.988083  458984 kubeadm.go:402] duration metric: took 15.985542694s to StartCluster
	I1003 19:33:01.988099  458984 settings.go:142] acquiring lock: {Name:mkc95577dbc448e3409dfa2b5e53a3a1327cb451 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:33:01.988179  458984 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21625-284583/kubeconfig
	I1003 19:33:01.989133  458984 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/kubeconfig: {Name:mkc1323fd87f4a78231a26d2dab0dff7feecf1e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:33:01.989441  458984 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1003 19:33:01.989723  458984 config.go:182] Loaded profile config "cert-expiration-324520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 19:33:01.989770  458984 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1003 19:33:01.989844  458984 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1003 19:33:01.990308  458984 addons.go:69] Setting storage-provisioner=true in profile "cert-expiration-324520"
	I1003 19:33:01.990316  458984 addons.go:69] Setting default-storageclass=true in profile "cert-expiration-324520"
	I1003 19:33:01.990333  458984 addons.go:238] Setting addon storage-provisioner=true in "cert-expiration-324520"
	I1003 19:33:01.990335  458984 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "cert-expiration-324520"
	I1003 19:33:01.990357  458984 host.go:66] Checking if "cert-expiration-324520" exists ...
	I1003 19:33:01.990672  458984 cli_runner.go:164] Run: docker container inspect cert-expiration-324520 --format={{.State.Status}}
	I1003 19:33:01.990922  458984 cli_runner.go:164] Run: docker container inspect cert-expiration-324520 --format={{.State.Status}}
	I1003 19:33:01.995547  458984 out.go:179] * Verifying Kubernetes components...
	I1003 19:33:02.000961  458984 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 19:33:02.039770  458984 addons.go:238] Setting addon default-storageclass=true in "cert-expiration-324520"
	I1003 19:33:02.039798  458984 host.go:66] Checking if "cert-expiration-324520" exists ...
	I1003 19:33:02.040213  458984 cli_runner.go:164] Run: docker container inspect cert-expiration-324520 --format={{.State.Status}}
	I1003 19:33:02.052817  458984 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 19:33:02.056963  458984 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 19:33:02.056975  458984 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1003 19:33:02.057040  458984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-324520
	I1003 19:33:02.080081  458984 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1003 19:33:02.080096  458984 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1003 19:33:02.080169  458984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-324520
	I1003 19:33:02.099989  458984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/cert-expiration-324520/id_rsa Username:docker}
	I1003 19:33:02.130918  458984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/cert-expiration-324520/id_rsa Username:docker}
	I1003 19:33:02.297490  458984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 19:33:02.299947  458984 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1003 19:33:02.335171  458984 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 19:33:02.363339  458984 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1003 19:33:02.887060  458984 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
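The sed pipeline run at 19:33:02.299947 patches the coredns ConfigMap so in-cluster DNS can resolve host.minikube.internal. Reconstructed from that command rather than from the resulting ConfigMap, the fragment injected just ahead of the forward plugin in the Corefile would look roughly like:

	hosts {
	   192.168.76.1 host.minikube.internal
	   fallthrough
	}

with a log directive also inserted ahead of the errors plugin.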
	I1003 19:33:02.888813  458984 api_server.go:52] waiting for apiserver process to appear ...
	I1003 19:33:02.888860  458984 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 19:33:02.911999  458984 api_server.go:72] duration metric: took 922.199942ms to wait for apiserver process to appear ...
	I1003 19:33:02.912013  458984 api_server.go:88] waiting for apiserver healthz status ...
	I1003 19:33:02.912032  458984 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1003 19:33:02.930578  458984 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1003 19:33:02.932046  458984 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1003 19:33:02.933382  458984 api_server.go:141] control plane version: v1.34.1
	I1003 19:33:02.933397  458984 api_server.go:131] duration metric: took 21.378983ms to wait for apiserver health ...
	I1003 19:33:02.933417  458984 system_pods.go:43] waiting for kube-system pods to appear ...
	I1003 19:33:02.933630  458984 addons.go:514] duration metric: took 943.782775ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1003 19:33:02.936956  458984 system_pods.go:59] 5 kube-system pods found
	I1003 19:33:02.936981  458984 system_pods.go:61] "etcd-cert-expiration-324520" [84c78e42-0b1f-4e5a-8dcd-0c312c3cc472] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1003 19:33:02.936989  458984 system_pods.go:61] "kube-apiserver-cert-expiration-324520" [09e08ace-59d0-4425-acfd-b6d84a1aa37a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1003 19:33:02.937000  458984 system_pods.go:61] "kube-controller-manager-cert-expiration-324520" [ab333a6e-491a-4676-8eed-afb6a46158f0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1003 19:33:02.937006  458984 system_pods.go:61] "kube-scheduler-cert-expiration-324520" [c8d9fb15-c186-45fc-af36-107d0660a1b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1003 19:33:02.937010  458984 system_pods.go:61] "storage-provisioner" [a4de08c9-7177-460e-b26b-52d46c62fffc] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1003 19:33:02.937016  458984 system_pods.go:74] duration metric: took 3.593357ms to wait for pod list to return data ...
	I1003 19:33:02.937027  458984 kubeadm.go:586] duration metric: took 947.233173ms to wait for: map[apiserver:true system_pods:true]
	I1003 19:33:02.937039  458984 node_conditions.go:102] verifying NodePressure condition ...
	I1003 19:33:02.939728  458984 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1003 19:33:02.939757  458984 node_conditions.go:123] node cpu capacity is 2
	I1003 19:33:02.939768  458984 node_conditions.go:105] duration metric: took 2.725084ms to run NodePressure ...
	I1003 19:33:02.939780  458984 start.go:241] waiting for startup goroutines ...
	I1003 19:33:03.390598  458984 kapi.go:214] "coredns" deployment in "kube-system" namespace and "cert-expiration-324520" context rescaled to 1 replicas
	I1003 19:33:03.390644  458984 start.go:246] waiting for cluster config update ...
	I1003 19:33:03.390655  458984 start.go:255] writing updated cluster config ...
	I1003 19:33:03.391008  458984 ssh_runner.go:195] Run: rm -f paused
	I1003 19:33:03.451210  458984 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1003 19:33:03.457337  458984 out.go:179] * Done! kubectl is now configured to use "cert-expiration-324520" cluster and "default" namespace by default
	I1003 19:34:18.347934  454580 kubeadm.go:318] error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.85.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.85.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	I1003 19:34:18.348034  454580 kubeadm.go:318] To see the stack trace of this error execute with --v=5 or higher
	I1003 19:34:18.352589  454580 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1003 19:34:18.352654  454580 kubeadm.go:318] [preflight] Running pre-flight checks
	I1003 19:34:18.352782  454580 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1003 19:34:18.352848  454580 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1003 19:34:18.352892  454580 kubeadm.go:318] OS: Linux
	I1003 19:34:18.352946  454580 kubeadm.go:318] CGROUPS_CPU: enabled
	I1003 19:34:18.353004  454580 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1003 19:34:18.353067  454580 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1003 19:34:18.353135  454580 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1003 19:34:18.353192  454580 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1003 19:34:18.353254  454580 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1003 19:34:18.353310  454580 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1003 19:34:18.353369  454580 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1003 19:34:18.353423  454580 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1003 19:34:18.353507  454580 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1003 19:34:18.353618  454580 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1003 19:34:18.353727  454580 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1003 19:34:18.353802  454580 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1003 19:34:18.356930  454580 out.go:252]   - Generating certificates and keys ...
	I1003 19:34:18.357029  454580 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1003 19:34:18.357104  454580 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1003 19:34:18.357191  454580 kubeadm.go:318] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1003 19:34:18.357263  454580 kubeadm.go:318] [certs] Using existing front-proxy-ca certificate authority
	I1003 19:34:18.357341  454580 kubeadm.go:318] [certs] Using existing front-proxy-client certificate and key on disk
	I1003 19:34:18.357401  454580 kubeadm.go:318] [certs] Using existing etcd/ca certificate authority
	I1003 19:34:18.357471  454580 kubeadm.go:318] [certs] Using existing etcd/server certificate and key on disk
	I1003 19:34:18.357549  454580 kubeadm.go:318] [certs] Using existing etcd/peer certificate and key on disk
	I1003 19:34:18.357641  454580 kubeadm.go:318] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1003 19:34:18.357724  454580 kubeadm.go:318] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1003 19:34:18.357770  454580 kubeadm.go:318] [certs] Using the existing "sa" key
	I1003 19:34:18.357834  454580 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1003 19:34:18.357891  454580 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1003 19:34:18.357954  454580 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1003 19:34:18.358013  454580 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1003 19:34:18.358082  454580 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1003 19:34:18.358143  454580 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1003 19:34:18.358243  454580 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1003 19:34:18.358316  454580 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1003 19:34:18.361254  454580 out.go:252]   - Booting up control plane ...
	I1003 19:34:18.361399  454580 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1003 19:34:18.361527  454580 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1003 19:34:18.361603  454580 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1003 19:34:18.361718  454580 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1003 19:34:18.361821  454580 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1003 19:34:18.361935  454580 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1003 19:34:18.362025  454580 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1003 19:34:18.362068  454580 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1003 19:34:18.362209  454580 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1003 19:34:18.362321  454580 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1003 19:34:18.362392  454580 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 2.000785241s
	I1003 19:34:18.362493  454580 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1003 19:34:18.362582  454580 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1003 19:34:18.362696  454580 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1003 19:34:18.362790  454580 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1003 19:34:18.362880  454580 kubeadm.go:318] [control-plane-check] kube-controller-manager is not healthy after 4m0.000653232s
	I1003 19:34:18.362959  454580 kubeadm.go:318] [control-plane-check] kube-apiserver is not healthy after 4m0.000759408s
	I1003 19:34:18.363039  454580 kubeadm.go:318] [control-plane-check] kube-scheduler is not healthy after 4m0.005894118s
	I1003 19:34:18.363048  454580 kubeadm.go:318] 
	I1003 19:34:18.363150  454580 kubeadm.go:318] A control plane component may have crashed or exited when started by the container runtime.
	I1003 19:34:18.363247  454580 kubeadm.go:318] To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1003 19:34:18.363342  454580 kubeadm.go:318] Here is one example how you may list all running Kubernetes containers by using crictl:
	I1003 19:34:18.363443  454580 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1003 19:34:18.363523  454580 kubeadm.go:318] 	Once you have found the failing container, you can inspect its logs with:
	I1003 19:34:18.363612  454580 kubeadm.go:318] 	- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	I1003 19:34:18.363621  454580 kubeadm.go:318] 
	I1003 19:34:18.363685  454580 kubeadm.go:402] duration metric: took 8m14.669107964s to StartCluster
	I1003 19:34:18.363724  454580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 19:34:18.363792  454580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 19:34:18.389290  454580 cri.go:89] found id: ""
	I1003 19:34:18.389325  454580 logs.go:282] 0 containers: []
	W1003 19:34:18.389335  454580 logs.go:284] No container was found matching "kube-apiserver"
	I1003 19:34:18.389341  454580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 19:34:18.389399  454580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 19:34:18.415011  454580 cri.go:89] found id: ""
	I1003 19:34:18.415033  454580 logs.go:282] 0 containers: []
	W1003 19:34:18.415041  454580 logs.go:284] No container was found matching "etcd"
	I1003 19:34:18.415047  454580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 19:34:18.415154  454580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 19:34:18.441330  454580 cri.go:89] found id: ""
	I1003 19:34:18.441366  454580 logs.go:282] 0 containers: []
	W1003 19:34:18.441375  454580 logs.go:284] No container was found matching "coredns"
	I1003 19:34:18.441382  454580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 19:34:18.441484  454580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 19:34:18.468767  454580 cri.go:89] found id: ""
	I1003 19:34:18.468794  454580 logs.go:282] 0 containers: []
	W1003 19:34:18.468802  454580 logs.go:284] No container was found matching "kube-scheduler"
	I1003 19:34:18.468809  454580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 19:34:18.468870  454580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 19:34:18.495249  454580 cri.go:89] found id: ""
	I1003 19:34:18.495281  454580 logs.go:282] 0 containers: []
	W1003 19:34:18.495290  454580 logs.go:284] No container was found matching "kube-proxy"
	I1003 19:34:18.495298  454580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 19:34:18.495358  454580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 19:34:18.521388  454580 cri.go:89] found id: ""
	I1003 19:34:18.521420  454580 logs.go:282] 0 containers: []
	W1003 19:34:18.521428  454580 logs.go:284] No container was found matching "kube-controller-manager"
	I1003 19:34:18.521435  454580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 19:34:18.521505  454580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 19:34:18.547437  454580 cri.go:89] found id: ""
	I1003 19:34:18.547479  454580 logs.go:282] 0 containers: []
	W1003 19:34:18.547488  454580 logs.go:284] No container was found matching "kindnet"
	I1003 19:34:18.547498  454580 logs.go:123] Gathering logs for kubelet ...
	I1003 19:34:18.547509  454580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 19:34:18.634059  454580 logs.go:123] Gathering logs for dmesg ...
	I1003 19:34:18.634097  454580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 19:34:18.650724  454580 logs.go:123] Gathering logs for describe nodes ...
	I1003 19:34:18.650751  454580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 19:34:18.721952  454580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 19:34:18.714182    2361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 19:34:18.714727    2361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 19:34:18.715865    2361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 19:34:18.716330    2361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 19:34:18.717796    2361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1003 19:34:18.714182    2361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 19:34:18.714727    2361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 19:34:18.715865    2361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 19:34:18.716330    2361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 19:34:18.717796    2361 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 19:34:18.722018  454580 logs.go:123] Gathering logs for CRI-O ...
	I1003 19:34:18.722047  454580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 19:34:18.796560  454580 logs.go:123] Gathering logs for container status ...
	I1003 19:34:18.796594  454580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1003 19:34:18.829645  454580 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 2.000785241s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000653232s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000759408s
	[control-plane-check] kube-scheduler is not healthy after 4m0.005894118s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.85.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.85.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	W1003 19:34:18.829696  454580 out.go:285] * 
	W1003 19:34:18.829776  454580 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 2.000785241s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000653232s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000759408s
	[control-plane-check] kube-scheduler is not healthy after 4m0.005894118s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.85.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.85.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1003 19:34:18.830053  454580 out.go:285] * 
	W1003 19:34:18.832321  454580 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 19:34:18.838675  454580 out.go:203] 
	W1003 19:34:18.841589  454580 out.go:285] X Exiting due to GUEST_START: failed to start node: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.34.1
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is healthy after 2.000785241s
	[control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	[control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	[control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	[control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	[control-plane-check] kube-controller-manager is not healthy after 4m0.000653232s
	[control-plane-check] kube-apiserver is not healthy after 4m0.000759408s
	[control-plane-check] kube-scheduler is not healthy after 4m0.005894118s
	
	A control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.
	Here is one example how you may list all running Kubernetes containers by using crictl:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'crictl --runtime-endpoint unix:///var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the control plane to start: [kube-controller-manager check failed at https://127.0.0.1:10257/healthz: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused, kube-apiserver check failed at https://192.168.85.2:8443/livez: Get "https://control-plane.minikube.internal:8443/livez?timeout=10s": dial tcp 192.168.85.2:8443: connect: connection refused, kube-scheduler check failed at https://127.0.0.1:10259/livez: Get "https://127.0.0.1:10259/livez": dial tcp 127.0.0.1:10259: connect: connection refused]
	To see the stack trace of this error execute with --v=5 or higher
	
	W1003 19:34:18.841618  454580 out.go:285] * 
	I1003 19:34:18.844712  454580 out.go:203] 
	
	
	==> CRI-O <==
	Oct 03 19:34:09 force-systemd-env-159095 crio[840]: time="2025-10-03T19:34:09.094637841Z" level=info msg="createCtr: deleting container f3bec041a0feaeb506e458d3461b26a1bc69758223cb2f468345017292ad0fe9 from storage" id=5c7e1f07-a4d4-4b8f-b9fd-09d2668d41c9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:34:09 force-systemd-env-159095 crio[840]: time="2025-10-03T19:34:09.09577526Z" level=info msg="createCtr: releasing container name k8s_kube-scheduler_kube-scheduler-force-systemd-env-159095_kube-system_713b241416791ed6d8e49fd8d758539f_0" id=a2e23608-d9ba-4f06-9aad-a4f6b1bd9956 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:34:09 force-systemd-env-159095 crio[840]: time="2025-10-03T19:34:09.09847787Z" level=info msg="createCtr: releasing container name k8s_kube-controller-manager_kube-controller-manager-force-systemd-env-159095_kube-system_8edc91c9b01b43efa86c58dae190d5a3_0" id=5c7e1f07-a4d4-4b8f-b9fd-09d2668d41c9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:34:10 force-systemd-env-159095 crio[840]: time="2025-10-03T19:34:10.059885581Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=dc42808a-d612-4d32-9e89-69ced9686c85 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 19:34:10 force-systemd-env-159095 crio[840]: time="2025-10-03T19:34:10.060261537Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=62ed4aee-1d36-42f9-95fb-747f756e7e6d name=/runtime.v1.ImageService/ImageStatus
	Oct 03 19:34:10 force-systemd-env-159095 crio[840]: time="2025-10-03T19:34:10.061313687Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=5d299939-0547-46a3-a83f-de4bad8025fd name=/runtime.v1.ImageService/ImageStatus
	Oct 03 19:34:10 force-systemd-env-159095 crio[840]: time="2025-10-03T19:34:10.06137731Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.4-0" id=6a1208c9-b5f6-4ef4-87c1-1c25e2616a8c name=/runtime.v1.ImageService/ImageStatus
	Oct 03 19:34:10 force-systemd-env-159095 crio[840]: time="2025-10-03T19:34:10.062840666Z" level=info msg="Creating container: kube-system/kube-apiserver-force-systemd-env-159095/kube-apiserver" id=825570b9-5594-4934-851a-dd04dfb7b080 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:34:10 force-systemd-env-159095 crio[840]: time="2025-10-03T19:34:10.063106418Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 19:34:10 force-systemd-env-159095 crio[840]: time="2025-10-03T19:34:10.065212121Z" level=info msg="Creating container: kube-system/etcd-force-systemd-env-159095/etcd" id=2b074319-3c15-461c-b1eb-29dcbe63ff8e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:34:10 force-systemd-env-159095 crio[840]: time="2025-10-03T19:34:10.066282757Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 19:34:10 force-systemd-env-159095 crio[840]: time="2025-10-03T19:34:10.067760809Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 19:34:10 force-systemd-env-159095 crio[840]: time="2025-10-03T19:34:10.068247946Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 19:34:10 force-systemd-env-159095 crio[840]: time="2025-10-03T19:34:10.086693168Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 19:34:10 force-systemd-env-159095 crio[840]: time="2025-10-03T19:34:10.08720895Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 19:34:10 force-systemd-env-159095 crio[840]: time="2025-10-03T19:34:10.095582171Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=825570b9-5594-4934-851a-dd04dfb7b080 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:34:10 force-systemd-env-159095 crio[840]: time="2025-10-03T19:34:10.096931408Z" level=info msg="createCtr: deleting container ID 9ad7a4a6ffa7854387dbaa16c834613f638cf185d56a9300c43521e1908d8b59 from idIndex" id=825570b9-5594-4934-851a-dd04dfb7b080 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:34:10 force-systemd-env-159095 crio[840]: time="2025-10-03T19:34:10.09697529Z" level=info msg="createCtr: removing container 9ad7a4a6ffa7854387dbaa16c834613f638cf185d56a9300c43521e1908d8b59" id=825570b9-5594-4934-851a-dd04dfb7b080 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:34:10 force-systemd-env-159095 crio[840]: time="2025-10-03T19:34:10.097016226Z" level=info msg="createCtr: deleting container 9ad7a4a6ffa7854387dbaa16c834613f638cf185d56a9300c43521e1908d8b59 from storage" id=825570b9-5594-4934-851a-dd04dfb7b080 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:34:10 force-systemd-env-159095 crio[840]: time="2025-10-03T19:34:10.099960587Z" level=info msg="createCtr: releasing container name k8s_kube-apiserver_kube-apiserver-force-systemd-env-159095_kube-system_e10bea8f3e6666230e6970f6a0efc4d2_0" id=825570b9-5594-4934-851a-dd04dfb7b080 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:34:10 force-systemd-env-159095 crio[840]: time="2025-10-03T19:34:10.104704744Z" level=error msg="Container creation error: cannot open sd-bus: No such file or directory\n" id=2b074319-3c15-461c-b1eb-29dcbe63ff8e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:34:10 force-systemd-env-159095 crio[840]: time="2025-10-03T19:34:10.108176136Z" level=info msg="createCtr: deleting container ID 8895580afab7c75ffd9f2f280b5c05de9da91ef262e76f54af8b20ee18f8bb5c from idIndex" id=2b074319-3c15-461c-b1eb-29dcbe63ff8e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:34:10 force-systemd-env-159095 crio[840]: time="2025-10-03T19:34:10.108229092Z" level=info msg="createCtr: removing container 8895580afab7c75ffd9f2f280b5c05de9da91ef262e76f54af8b20ee18f8bb5c" id=2b074319-3c15-461c-b1eb-29dcbe63ff8e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:34:10 force-systemd-env-159095 crio[840]: time="2025-10-03T19:34:10.108271554Z" level=info msg="createCtr: deleting container 8895580afab7c75ffd9f2f280b5c05de9da91ef262e76f54af8b20ee18f8bb5c from storage" id=2b074319-3c15-461c-b1eb-29dcbe63ff8e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:34:10 force-systemd-env-159095 crio[840]: time="2025-10-03T19:34:10.111025152Z" level=info msg="createCtr: releasing container name k8s_etcd_etcd-force-systemd-env-159095_kube-system_e3fbb69c59efca3799dcb683ac5c108a_0" id=2b074319-3c15-461c-b1eb-29dcbe63ff8e name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1003 19:34:19.858647    2472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 19:34:19.859450    2472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 19:34:19.860950    2472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 19:34:19.861341    2472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1003 19:34:19.863011    2472 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Oct 3 18:59] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:00] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:05] overlayfs: idmapped layers are currently not supported
	[ +33.149550] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:07] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:08] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:09] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:10] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:11] overlayfs: idmapped layers are currently not supported
	[  +4.287643] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:12] overlayfs: idmapped layers are currently not supported
	[ +24.839009] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:13] overlayfs: idmapped layers are currently not supported
	[ +26.493253] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:15] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:16] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:17] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000010] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Oct 3 19:18] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:20] overlayfs: idmapped layers are currently not supported
	[ +32.018892] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:22] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:24] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:26] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:32] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 19:34:19 up  2:16,  0 user,  load average: 0.39, 0.75, 1.53
	Linux force-systemd-env-159095 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Oct 03 19:34:09 force-systemd-env-159095 kubelet[1790]:         container kube-scheduler start failed in pod kube-scheduler-force-systemd-env-159095_kube-system(713b241416791ed6d8e49fd8d758539f): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 19:34:09 force-systemd-env-159095 kubelet[1790]:  > logger="UnhandledError"
	Oct 03 19:34:09 force-systemd-env-159095 kubelet[1790]: E1003 19:34:09.101332    1790 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-scheduler-force-systemd-env-159095" podUID="713b241416791ed6d8e49fd8d758539f"
	Oct 03 19:34:10 force-systemd-env-159095 kubelet[1790]: E1003 19:34:10.058957    1790 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"force-systemd-env-159095\" not found" node="force-systemd-env-159095"
	Oct 03 19:34:10 force-systemd-env-159095 kubelet[1790]: E1003 19:34:10.059412    1790 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"force-systemd-env-159095\" not found" node="force-systemd-env-159095"
	Oct 03 19:34:10 force-systemd-env-159095 kubelet[1790]: E1003 19:34:10.100391    1790 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 03 19:34:10 force-systemd-env-159095 kubelet[1790]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 03 19:34:10 force-systemd-env-159095 kubelet[1790]:  > podSandboxID="0804ee8c6205f3160beef797894e76b057dfdf0af1f2ff64f979648e534d66de"
	Oct 03 19:34:10 force-systemd-env-159095 kubelet[1790]: E1003 19:34:10.100568    1790 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 03 19:34:10 force-systemd-env-159095 kubelet[1790]:         container kube-apiserver start failed in pod kube-apiserver-force-systemd-env-159095_kube-system(e10bea8f3e6666230e6970f6a0efc4d2): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 19:34:10 force-systemd-env-159095 kubelet[1790]:  > logger="UnhandledError"
	Oct 03 19:34:10 force-systemd-env-159095 kubelet[1790]: E1003 19:34:10.100618    1790 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/kube-apiserver-force-systemd-env-159095" podUID="e10bea8f3e6666230e6970f6a0efc4d2"
	Oct 03 19:34:10 force-systemd-env-159095 kubelet[1790]: E1003 19:34:10.111375    1790 log.go:32] "CreateContainer in sandbox from runtime service failed" err=<
	Oct 03 19:34:10 force-systemd-env-159095 kubelet[1790]:         rpc error: code = Unknown desc = container create failed: cannot open sd-bus: No such file or directory
	Oct 03 19:34:10 force-systemd-env-159095 kubelet[1790]:  > podSandboxID="1cfe0904bed23b8ecbe762e1fe36a5eb5b5c76590c09c4ec9fedcc953f6ebe1e"
	Oct 03 19:34:10 force-systemd-env-159095 kubelet[1790]: E1003 19:34:10.111482    1790 kuberuntime_manager.go:1449] "Unhandled Error" err=<
	Oct 03 19:34:10 force-systemd-env-159095 kubelet[1790]:         container etcd start failed in pod etcd-force-systemd-env-159095_kube-system(e3fbb69c59efca3799dcb683ac5c108a): CreateContainerError: container create failed: cannot open sd-bus: No such file or directory
	Oct 03 19:34:10 force-systemd-env-159095 kubelet[1790]:  > logger="UnhandledError"
	Oct 03 19:34:10 force-systemd-env-159095 kubelet[1790]: E1003 19:34:10.111541    1790 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"etcd\" with CreateContainerError: \"container create failed: cannot open sd-bus: No such file or directory\\n\"" pod="kube-system/etcd-force-systemd-env-159095" podUID="e3fbb69c59efca3799dcb683ac5c108a"
	Oct 03 19:34:12 force-systemd-env-159095 kubelet[1790]: E1003 19:34:12.134146    1790 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://192.168.85.2:8443/api/v1/namespaces/default/events\": dial tcp 192.168.85.2:8443: connect: connection refused" event="&Event{ObjectMeta:{force-systemd-env-159095.186b11f3c4969389  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:force-systemd-env-159095,UID:force-systemd-env-159095,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node force-systemd-env-159095 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:force-systemd-env-159095,},FirstTimestamp:2025-10-03 19:30:18.091238281 +0000 UTC m=+1.753476430,LastTimestamp:2025-10-03 19:30:18.091238281 +0000 UTC m=+1.753476430,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet
,ReportingInstance:force-systemd-env-159095,}"
	Oct 03 19:34:13 force-systemd-env-159095 kubelet[1790]: E1003 19:34:13.934974    1790 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.85.2:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.85.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	Oct 03 19:34:14 force-systemd-env-159095 kubelet[1790]: E1003 19:34:14.703641    1790 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.85.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/force-systemd-env-159095?timeout=10s\": dial tcp 192.168.85.2:8443: connect: connection refused" interval="7s"
	Oct 03 19:34:14 force-systemd-env-159095 kubelet[1790]: I1003 19:34:14.878742    1790 kubelet_node_status.go:75] "Attempting to register node" node="force-systemd-env-159095"
	Oct 03 19:34:14 force-systemd-env-159095 kubelet[1790]: E1003 19:34:14.879140    1790 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.85.2:8443/api/v1/nodes\": dial tcp 192.168.85.2:8443: connect: connection refused" node="force-systemd-env-159095"
	Oct 03 19:34:18 force-systemd-env-159095 kubelet[1790]: E1003 19:34:18.125505    1790 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"force-systemd-env-159095\" not found"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-env-159095 -n force-systemd-env-159095
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p force-systemd-env-159095 -n force-systemd-env-159095: exit status 6 (341.503982ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1003 19:34:20.323070  461911 status.go:458] kubeconfig endpoint: get endpoint: "force-systemd-env-159095" does not appear in /home/jenkins/minikube-integration/21625-284583/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:262: status error: exit status 6 (may be ok)
helpers_test.go:264: "force-systemd-env-159095" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "force-systemd-env-159095" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-159095
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-159095: (1.933809073s)
--- FAIL: TestForceSystemdEnv (513.50s)
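The recurring "Container creation error: cannot open sd-bus: No such file or directory" lines in the CRI-O and kubelet output above are the proximate cause of this failure: every static-pod container create is rejected, so the control plane never starts and kubeadm gives up after 4m0s. That error typically means the runtime was asked to use the systemd cgroup manager (which this test deliberately forces) while no systemd D-Bus socket is reachable inside the node container. A hedged, illustrative way to localize this while the node container still exists; these commands are not run by the test:

	# which cgroup manager CRI-O is configured with; "systemd" requires a reachable sd-bus
	docker exec force-systemd-env-159095 grep -r cgroup_manager /etc/crio/
	# are systemd's bus sockets actually present inside the node container?
	docker exec force-systemd-env-159095 ls -l /run/systemd/private /run/dbus/system_bus_socket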

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (603.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-680560 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-680560 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-9r2qn" [7c5e6d68-2db7-4a04-8a4a-83a11ad767d8] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-680560 -n functional-680560
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-10-03 18:47:01.45428786 +0000 UTC m=+1222.719365690
functional_test.go:1645: (dbg) Run:  kubectl --context functional-680560 describe po hello-node-connect-7d85dfc575-9r2qn -n default
functional_test.go:1645: (dbg) kubectl --context functional-680560 describe po hello-node-connect-7d85dfc575-9r2qn -n default:
Name:             hello-node-connect-7d85dfc575-9r2qn
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-680560/192.168.49.2
Start Time:       Fri, 03 Oct 2025 18:37:00 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
  IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-g8dfh (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-g8dfh:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-9r2qn to functional-680560
  Normal   Pulling    6m53s (x5 over 9m58s)   kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     6m53s (x5 over 9m58s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     6m53s (x5 over 9m58s)   kubelet            Error: ErrImagePull
  Warning  Failed     4m50s (x20 over 9m58s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m36s (x21 over 9m58s)  kubelet            Back-off pulling image "kicbase/echo-server"
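The "short name mode is enforcing" event above is the root cause of this pull failure: the deployment created at functional_test.go:1636 uses the unqualified image name kicbase/echo-server, and with short-name-mode = "enforcing" in the node's /etc/containers/registries.conf an ambiguous short name cannot be resolved non-interactively. A minimal sketch of the two usual remedies, for illustration only (neither is what the test currently does):

	# 1) fully qualify the image so short-name resolution is bypassed
	kubectl --context functional-680560 create deployment hello-node-connect --image docker.io/kicbase/echo-server:latest
	# 2) or relax enforcement on the node, in /etc/containers/registries.conf
	short-name-mode = "permissive"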
functional_test.go:1645: (dbg) Run:  kubectl --context functional-680560 logs hello-node-connect-7d85dfc575-9r2qn -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-680560 logs hello-node-connect-7d85dfc575-9r2qn -n default: exit status 1 (107.727307ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-9r2qn" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-680560 logs hello-node-connect-7d85dfc575-9r2qn -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-680560 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-9r2qn
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-680560/192.168.49.2
Start Time:       Fri, 03 Oct 2025 18:37:00 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
  IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-g8dfh (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-g8dfh:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-9r2qn to functional-680560
  Normal   Pulling    6m53s (x5 over 9m58s)   kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     6m53s (x5 over 9m58s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     6m53s (x5 over 9m58s)   kubelet            Error: ErrImagePull
  Warning  Failed     4m50s (x20 over 9m58s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m36s (x21 over 9m58s)  kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-680560 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-680560 logs -l app=hello-node-connect: exit status 1 (101.909757ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-9r2qn" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-680560 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-680560 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.96.99.160
IPs:                      10.96.99.160
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31828/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
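The empty Endpoints field in the Service description above follows directly from the pod state: a Service only publishes endpoints for Ready pods matching its selector, and the lone app=hello-node-connect pod never left ImagePullBackOff. An illustrative cross-check, not part of the test:

	kubectl --context functional-680560 get endpoints hello-node-connect
	kubectl --context functional-680560 get pods -l app=hello-node-connect -o wide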
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-680560
helpers_test.go:243: (dbg) docker inspect functional-680560:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "618e76c347db74eee4ff1be0f66c8c7defac89a363f0aea4ce653ef5f09efc19",
	        "Created": "2025-10-03T18:34:06.099798603Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 301867,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-03T18:34:06.140214591Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/618e76c347db74eee4ff1be0f66c8c7defac89a363f0aea4ce653ef5f09efc19/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/618e76c347db74eee4ff1be0f66c8c7defac89a363f0aea4ce653ef5f09efc19/hostname",
	        "HostsPath": "/var/lib/docker/containers/618e76c347db74eee4ff1be0f66c8c7defac89a363f0aea4ce653ef5f09efc19/hosts",
	        "LogPath": "/var/lib/docker/containers/618e76c347db74eee4ff1be0f66c8c7defac89a363f0aea4ce653ef5f09efc19/618e76c347db74eee4ff1be0f66c8c7defac89a363f0aea4ce653ef5f09efc19-json.log",
	        "Name": "/functional-680560",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-680560:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-680560",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "618e76c347db74eee4ff1be0f66c8c7defac89a363f0aea4ce653ef5f09efc19",
	                "LowerDir": "/var/lib/docker/overlay2/524fd79877a298c27f23c001c013f71e89cc8cc2e66fde5883dc445f89387e6d-init/diff:/var/lib/docker/overlay2/87b205803817b0b71a214d995ab7e10a92033bbf72d76d6e052f1d21ccecb313/diff",
	                "MergedDir": "/var/lib/docker/overlay2/524fd79877a298c27f23c001c013f71e89cc8cc2e66fde5883dc445f89387e6d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/524fd79877a298c27f23c001c013f71e89cc8cc2e66fde5883dc445f89387e6d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/524fd79877a298c27f23c001c013f71e89cc8cc2e66fde5883dc445f89387e6d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-680560",
	                "Source": "/var/lib/docker/volumes/functional-680560/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-680560",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-680560",
	                "name.minikube.sigs.k8s.io": "functional-680560",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "23f7c16469b5035a7b3f9128df2165547734e1a1e5cb9151db32fd721450b845",
	            "SandboxKey": "/var/run/docker/netns/23f7c16469b5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-680560": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ae:17:dd:3a:b6:9b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5a42b58becc58fc0b8dda05640bf91ea18def4e879b528aadedfc5f5a00c3abe",
	                    "EndpointID": "36cf2a86dcebdca0beee1477ec2c08b7c5087f73bd340cfce8f52cc06380e641",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-680560",
	                        "618e76c347db"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
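The inspect output above also shows how minikube reaches the node: every exposed container port (22, 2376, 5000, 8441, 32443) is published on 127.0.0.1 with an ephemeral host port, and the provisioning log below extracts the SSH port with a Go template. The equivalent standalone command, included here purely as an illustration of that template:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-680560
	# prints 33148 for this container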
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-680560 -n functional-680560
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-680560 logs -n 25: (1.43285039s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                            ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cache   │ functional-680560 cache reload                                                                                             │ functional-680560 │ jenkins │ v1.37.0 │ 03 Oct 25 18:36 UTC │ 03 Oct 25 18:36 UTC │
	│ ssh     │ functional-680560 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                    │ functional-680560 │ jenkins │ v1.37.0 │ 03 Oct 25 18:36 UTC │ 03 Oct 25 18:36 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                           │ minikube          │ jenkins │ v1.37.0 │ 03 Oct 25 18:36 UTC │ 03 Oct 25 18:36 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 03 Oct 25 18:36 UTC │ 03 Oct 25 18:36 UTC │
	│ kubectl │ functional-680560 kubectl -- --context functional-680560 get pods                                                          │ functional-680560 │ jenkins │ v1.37.0 │ 03 Oct 25 18:36 UTC │ 03 Oct 25 18:36 UTC │
	│ start   │ -p functional-680560 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                   │ functional-680560 │ jenkins │ v1.37.0 │ 03 Oct 25 18:36 UTC │ 03 Oct 25 18:36 UTC │
	│ service │ invalid-svc -p functional-680560                                                                                           │ functional-680560 │ jenkins │ v1.37.0 │ 03 Oct 25 18:36 UTC │                     │
	│ config  │ functional-680560 config unset cpus                                                                                        │ functional-680560 │ jenkins │ v1.37.0 │ 03 Oct 25 18:36 UTC │ 03 Oct 25 18:36 UTC │
	│ config  │ functional-680560 config get cpus                                                                                          │ functional-680560 │ jenkins │ v1.37.0 │ 03 Oct 25 18:36 UTC │                     │
	│ config  │ functional-680560 config set cpus 2                                                                                        │ functional-680560 │ jenkins │ v1.37.0 │ 03 Oct 25 18:36 UTC │ 03 Oct 25 18:36 UTC │
	│ config  │ functional-680560 config get cpus                                                                                          │ functional-680560 │ jenkins │ v1.37.0 │ 03 Oct 25 18:36 UTC │ 03 Oct 25 18:36 UTC │
	│ config  │ functional-680560 config unset cpus                                                                                        │ functional-680560 │ jenkins │ v1.37.0 │ 03 Oct 25 18:36 UTC │ 03 Oct 25 18:36 UTC │
	│ ssh     │ functional-680560 ssh -n functional-680560 sudo cat /home/docker/cp-test.txt                                               │ functional-680560 │ jenkins │ v1.37.0 │ 03 Oct 25 18:36 UTC │ 03 Oct 25 18:36 UTC │
	│ config  │ functional-680560 config get cpus                                                                                          │ functional-680560 │ jenkins │ v1.37.0 │ 03 Oct 25 18:36 UTC │                     │
	│ ssh     │ functional-680560 ssh echo hello                                                                                           │ functional-680560 │ jenkins │ v1.37.0 │ 03 Oct 25 18:36 UTC │ 03 Oct 25 18:36 UTC │
	│ cp      │ functional-680560 cp functional-680560:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2350196802/001/cp-test.txt │ functional-680560 │ jenkins │ v1.37.0 │ 03 Oct 25 18:36 UTC │ 03 Oct 25 18:36 UTC │
	│ ssh     │ functional-680560 ssh cat /etc/hostname                                                                                    │ functional-680560 │ jenkins │ v1.37.0 │ 03 Oct 25 18:36 UTC │ 03 Oct 25 18:36 UTC │
	│ ssh     │ functional-680560 ssh -n functional-680560 sudo cat /home/docker/cp-test.txt                                               │ functional-680560 │ jenkins │ v1.37.0 │ 03 Oct 25 18:36 UTC │ 03 Oct 25 18:36 UTC │
	│ tunnel  │ functional-680560 tunnel --alsologtostderr                                                                                 │ functional-680560 │ jenkins │ v1.37.0 │ 03 Oct 25 18:36 UTC │                     │
	│ tunnel  │ functional-680560 tunnel --alsologtostderr                                                                                 │ functional-680560 │ jenkins │ v1.37.0 │ 03 Oct 25 18:36 UTC │                     │
	│ cp      │ functional-680560 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                  │ functional-680560 │ jenkins │ v1.37.0 │ 03 Oct 25 18:36 UTC │ 03 Oct 25 18:36 UTC │
	│ ssh     │ functional-680560 ssh -n functional-680560 sudo cat /tmp/does/not/exist/cp-test.txt                                        │ functional-680560 │ jenkins │ v1.37.0 │ 03 Oct 25 18:36 UTC │ 03 Oct 25 18:36 UTC │
	│ tunnel  │ functional-680560 tunnel --alsologtostderr                                                                                 │ functional-680560 │ jenkins │ v1.37.0 │ 03 Oct 25 18:36 UTC │                     │
	│ addons  │ functional-680560 addons list                                                                                              │ functional-680560 │ jenkins │ v1.37.0 │ 03 Oct 25 18:37 UTC │ 03 Oct 25 18:37 UTC │
	│ addons  │ functional-680560 addons list -o json                                                                                      │ functional-680560 │ jenkins │ v1.37.0 │ 03 Oct 25 18:37 UTC │ 03 Oct 25 18:37 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/03 18:36:09
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 18:36:09.274975  306185 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:36:09.275087  306185 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:36:09.275091  306185 out.go:374] Setting ErrFile to fd 2...
	I1003 18:36:09.275095  306185 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:36:09.275374  306185 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-284583/.minikube/bin
	I1003 18:36:09.275748  306185 out.go:368] Setting JSON to false
	I1003 18:36:09.276652  306185 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":4721,"bootTime":1759511849,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1003 18:36:09.276703  306185 start.go:140] virtualization:  
	I1003 18:36:09.280370  306185 out.go:179] * [functional-680560] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1003 18:36:09.283457  306185 out.go:179]   - MINIKUBE_LOCATION=21625
	I1003 18:36:09.283528  306185 notify.go:220] Checking for updates...
	I1003 18:36:09.287383  306185 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 18:36:09.290467  306185 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21625-284583/kubeconfig
	I1003 18:36:09.293275  306185 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-284583/.minikube
	I1003 18:36:09.296187  306185 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1003 18:36:09.299257  306185 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 18:36:09.302601  306185 config.go:182] Loaded profile config "functional-680560": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:36:09.302695  306185 driver.go:421] Setting default libvirt URI to qemu:///system
	I1003 18:36:09.334201  306185 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1003 18:36:09.334297  306185 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:36:09.391859  306185 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:65 SystemTime:2025-10-03 18:36:09.383080098 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1003 18:36:09.391952  306185 docker.go:318] overlay module found
	I1003 18:36:09.394987  306185 out.go:179] * Using the docker driver based on existing profile
	I1003 18:36:09.397779  306185 start.go:304] selected driver: docker
	I1003 18:36:09.397788  306185 start.go:924] validating driver "docker" against &{Name:functional-680560 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-680560 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false D
isableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:36:09.397874  306185 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 18:36:09.397993  306185 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:36:09.456520  306185 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:65 SystemTime:2025-10-03 18:36:09.447641681 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1003 18:36:09.457005  306185 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 18:36:09.457029  306185 cni.go:84] Creating CNI manager for ""
	I1003 18:36:09.457083  306185 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 18:36:09.457147  306185 start.go:348] cluster config:
	{Name:functional-680560 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-680560 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Di
sableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:36:09.460336  306185 out.go:179] * Starting "functional-680560" primary control-plane node in "functional-680560" cluster
	I1003 18:36:09.463045  306185 cache.go:123] Beginning downloading kic base image for docker with crio
	I1003 18:36:09.465997  306185 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1003 18:36:09.468882  306185 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 18:36:09.468931  306185 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21625-284583/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1003 18:36:09.468938  306185 cache.go:58] Caching tarball of preloaded images
	I1003 18:36:09.468965  306185 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1003 18:36:09.469022  306185 preload.go:233] Found /home/jenkins/minikube-integration/21625-284583/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1003 18:36:09.469030  306185 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1003 18:36:09.469148  306185 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/functional-680560/config.json ...
	I1003 18:36:09.498284  306185 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1003 18:36:09.498296  306185 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1003 18:36:09.498314  306185 cache.go:232] Successfully downloaded all kic artifacts
	I1003 18:36:09.498335  306185 start.go:360] acquireMachinesLock for functional-680560: {Name:mkc087b75c454509149302fed2ad2ca72bb7de16 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 18:36:09.498404  306185 start.go:364] duration metric: took 52.244µs to acquireMachinesLock for "functional-680560"
	I1003 18:36:09.498422  306185 start.go:96] Skipping create...Using existing machine configuration
	I1003 18:36:09.498426  306185 fix.go:54] fixHost starting: 
	I1003 18:36:09.498686  306185 cli_runner.go:164] Run: docker container inspect functional-680560 --format={{.State.Status}}
	I1003 18:36:09.516412  306185 fix.go:112] recreateIfNeeded on functional-680560: state=Running err=<nil>
	W1003 18:36:09.516438  306185 fix.go:138] unexpected machine state, will restart: <nil>
	I1003 18:36:09.519542  306185 out.go:252] * Updating the running docker "functional-680560" container ...
	I1003 18:36:09.519566  306185 machine.go:93] provisionDockerMachine start ...
	I1003 18:36:09.519657  306185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-680560
	I1003 18:36:09.536802  306185 main.go:141] libmachine: Using SSH client type: native
	I1003 18:36:09.537109  306185 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1003 18:36:09.537116  306185 main.go:141] libmachine: About to run SSH command:
	hostname
	I1003 18:36:09.672463  306185 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-680560
	
	I1003 18:36:09.672484  306185 ubuntu.go:182] provisioning hostname "functional-680560"
	I1003 18:36:09.672546  306185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-680560
	I1003 18:36:09.692264  306185 main.go:141] libmachine: Using SSH client type: native
	I1003 18:36:09.692569  306185 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1003 18:36:09.692583  306185 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-680560 && echo "functional-680560" | sudo tee /etc/hostname
	I1003 18:36:09.834959  306185 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-680560
	
	I1003 18:36:09.835039  306185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-680560
	I1003 18:36:09.852855  306185 main.go:141] libmachine: Using SSH client type: native
	I1003 18:36:09.853180  306185 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1003 18:36:09.853195  306185 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-680560' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-680560/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-680560' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 18:36:09.985115  306185 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 18:36:09.985140  306185 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21625-284583/.minikube CaCertPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21625-284583/.minikube}
	I1003 18:36:09.985163  306185 ubuntu.go:190] setting up certificates
	I1003 18:36:09.985171  306185 provision.go:84] configureAuth start
	I1003 18:36:09.985231  306185 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-680560
	I1003 18:36:10.016464  306185 provision.go:143] copyHostCerts
	I1003 18:36:10.016524  306185 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-284583/.minikube/ca.pem, removing ...
	I1003 18:36:10.016542  306185 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-284583/.minikube/ca.pem
	I1003 18:36:10.016630  306185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21625-284583/.minikube/ca.pem (1082 bytes)
	I1003 18:36:10.016797  306185 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-284583/.minikube/cert.pem, removing ...
	I1003 18:36:10.016803  306185 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-284583/.minikube/cert.pem
	I1003 18:36:10.016835  306185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21625-284583/.minikube/cert.pem (1123 bytes)
	I1003 18:36:10.016922  306185 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-284583/.minikube/key.pem, removing ...
	I1003 18:36:10.016926  306185 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-284583/.minikube/key.pem
	I1003 18:36:10.016954  306185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21625-284583/.minikube/key.pem (1675 bytes)
	I1003 18:36:10.017007  306185 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca-key.pem org=jenkins.functional-680560 san=[127.0.0.1 192.168.49.2 functional-680560 localhost minikube]
	I1003 18:36:10.807003  306185 provision.go:177] copyRemoteCerts
	I1003 18:36:10.807068  306185 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 18:36:10.807105  306185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-680560
	I1003 18:36:10.825833  306185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/functional-680560/id_rsa Username:docker}
	I1003 18:36:10.920894  306185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 18:36:10.938151  306185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1003 18:36:10.955869  306185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1003 18:36:10.973587  306185 provision.go:87] duration metric: took 988.403939ms to configureAuth
	I1003 18:36:10.973603  306185 ubuntu.go:206] setting minikube options for container-runtime
	I1003 18:36:10.973789  306185 config.go:182] Loaded profile config "functional-680560": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:36:10.973896  306185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-680560
	I1003 18:36:10.991637  306185 main.go:141] libmachine: Using SSH client type: native
	I1003 18:36:10.991927  306185 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1003 18:36:10.991939  306185 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1003 18:36:16.349286  306185 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1003 18:36:16.349298  306185 machine.go:96] duration metric: took 6.829726554s to provisionDockerMachine
	I1003 18:36:16.349308  306185 start.go:293] postStartSetup for "functional-680560" (driver="docker")
	I1003 18:36:16.349317  306185 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 18:36:16.349388  306185 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 18:36:16.349428  306185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-680560
	I1003 18:36:16.367013  306185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/functional-680560/id_rsa Username:docker}
	I1003 18:36:16.460605  306185 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 18:36:16.463877  306185 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1003 18:36:16.463894  306185 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1003 18:36:16.463903  306185 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-284583/.minikube/addons for local assets ...
	I1003 18:36:16.463955  306185 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-284583/.minikube/files for local assets ...
	I1003 18:36:16.464027  306185 filesync.go:149] local asset: /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem -> 2864342.pem in /etc/ssl/certs
	I1003 18:36:16.464098  306185 filesync.go:149] local asset: /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/test/nested/copy/286434/hosts -> hosts in /etc/test/nested/copy/286434
	I1003 18:36:16.464144  306185 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/286434
	I1003 18:36:16.471309  306185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem --> /etc/ssl/certs/2864342.pem (1708 bytes)
	I1003 18:36:16.489292  306185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/test/nested/copy/286434/hosts --> /etc/test/nested/copy/286434/hosts (40 bytes)
	I1003 18:36:16.506651  306185 start.go:296] duration metric: took 157.329371ms for postStartSetup
	I1003 18:36:16.506736  306185 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 18:36:16.506777  306185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-680560
	I1003 18:36:16.525399  306185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/functional-680560/id_rsa Username:docker}
	I1003 18:36:16.618283  306185 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1003 18:36:16.623182  306185 fix.go:56] duration metric: took 7.124748667s for fixHost
	I1003 18:36:16.623196  306185 start.go:83] releasing machines lock for "functional-680560", held for 7.124785262s
	I1003 18:36:16.623270  306185 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-680560
	I1003 18:36:16.640946  306185 ssh_runner.go:195] Run: cat /version.json
	I1003 18:36:16.640995  306185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-680560
	I1003 18:36:16.641020  306185 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 18:36:16.641071  306185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-680560
	I1003 18:36:16.660678  306185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/functional-680560/id_rsa Username:docker}
	I1003 18:36:16.682234  306185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/functional-680560/id_rsa Username:docker}
	I1003 18:36:16.852101  306185 ssh_runner.go:195] Run: systemctl --version
	I1003 18:36:16.858413  306185 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1003 18:36:16.897442  306185 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1003 18:36:16.901643  306185 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 18:36:16.901702  306185 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 18:36:16.909309  306185 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1003 18:36:16.909323  306185 start.go:495] detecting cgroup driver to use...
	I1003 18:36:16.909353  306185 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1003 18:36:16.909416  306185 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 18:36:16.924423  306185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 18:36:16.937548  306185 docker.go:218] disabling cri-docker service (if available) ...
	I1003 18:36:16.937611  306185 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1003 18:36:16.953135  306185 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1003 18:36:16.966292  306185 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1003 18:36:17.106465  306185 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1003 18:36:17.233493  306185 docker.go:234] disabling docker service ...
	I1003 18:36:17.233565  306185 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1003 18:36:17.249711  306185 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1003 18:36:17.263596  306185 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1003 18:36:17.397733  306185 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1003 18:36:17.538725  306185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1003 18:36:17.552107  306185 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 18:36:17.567799  306185 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1003 18:36:17.567876  306185 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:36:17.577305  306185 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1003 18:36:17.577365  306185 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:36:17.586786  306185 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:36:17.596652  306185 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:36:17.606495  306185 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 18:36:17.615511  306185 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:36:17.625487  306185 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:36:17.635311  306185 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 18:36:17.644785  306185 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 18:36:17.652862  306185 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 18:36:17.660699  306185 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:36:17.786535  306185 ssh_runner.go:195] Run: sudo systemctl restart crio
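	The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place before this restart. A sketch of the keys they leave behind, assuming minikube's stock drop-in layout (illustrative, not captured from the node):
	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]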
	I1003 18:36:17.989159  306185 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1003 18:36:17.989219  306185 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1003 18:36:17.992880  306185 start.go:563] Will wait 60s for crictl version
	I1003 18:36:17.992938  306185 ssh_runner.go:195] Run: which crictl
	I1003 18:36:17.999981  306185 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1003 18:36:18.028955  306185 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1003 18:36:18.029030  306185 ssh_runner.go:195] Run: crio --version
	I1003 18:36:18.058659  306185 ssh_runner.go:195] Run: crio --version
	I1003 18:36:18.091087  306185 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1003 18:36:18.094156  306185 cli_runner.go:164] Run: docker network inspect functional-680560 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 18:36:18.110216  306185 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1003 18:36:18.117507  306185 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1003 18:36:18.120434  306185 kubeadm.go:883] updating cluster {Name:functional-680560 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-680560 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1003 18:36:18.120566  306185 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 18:36:18.120639  306185 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 18:36:18.153842  306185 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 18:36:18.153853  306185 crio.go:433] Images already preloaded, skipping extraction
	I1003 18:36:18.153904  306185 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 18:36:18.180083  306185 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 18:36:18.180094  306185 cache_images.go:85] Images are preloaded, skipping loading
	I1003 18:36:18.180100  306185 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1003 18:36:18.180197  306185 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-680560 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-680560 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1003 18:36:18.180276  306185 ssh_runner.go:195] Run: crio config
	I1003 18:36:18.257483  306185 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1003 18:36:18.257513  306185 cni.go:84] Creating CNI manager for ""
	I1003 18:36:18.257522  306185 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 18:36:18.257530  306185 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1003 18:36:18.257552  306185 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-680560 NodeName:functional-680560 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:ma
p[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 18:36:18.257678  306185 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-680560"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
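	The block above is the multi-document kubeadm config (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is written to /var/tmp/minikube/kubeadm.yaml.new a few lines later. A minimal Go sketch for listing the documents in such a file, assuming gopkg.in/yaml.v3 is available (illustrative only, not minikube's own code):

	// Minimal sketch (not minikube code): print the apiVersion/kind of each
	// document in a multi-document kubeadm config such as
	// /var/tmp/minikube/kubeadm.yaml.new (path taken from the log below).
	package main

	import (
		"fmt"
		"io"
		"log"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			log.Fatal(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for {
			var doc struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			if err := dec.Decode(&doc); err == io.EOF {
				break
			} else if err != nil {
				log.Fatal(err)
			}
			fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
		}
	}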
	
	I1003 18:36:18.257749  306185 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1003 18:36:18.266024  306185 binaries.go:44] Found k8s binaries, skipping transfer
	I1003 18:36:18.266095  306185 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1003 18:36:18.273625  306185 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1003 18:36:18.286732  306185 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 18:36:18.299871  306185 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2064 bytes)
	I1003 18:36:18.312100  306185 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1003 18:36:18.315839  306185 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:36:18.454853  306185 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 18:36:18.468615  306185 certs.go:69] Setting up /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/functional-680560 for IP: 192.168.49.2
	I1003 18:36:18.468625  306185 certs.go:195] generating shared ca certs ...
	I1003 18:36:18.468639  306185 certs.go:227] acquiring lock for ca certs: {Name:mk5a10e6c921326e9c211447576eaeb893259ba7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:36:18.468837  306185 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21625-284583/.minikube/ca.key
	I1003 18:36:18.468903  306185 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.key
	I1003 18:36:18.468909  306185 certs.go:257] generating profile certs ...
	I1003 18:36:18.468987  306185 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/functional-680560/client.key
	I1003 18:36:18.469027  306185 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/functional-680560/apiserver.key.f00d2b72
	I1003 18:36:18.469064  306185 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/functional-680560/proxy-client.key
	I1003 18:36:18.469167  306185 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/286434.pem (1338 bytes)
	W1003 18:36:18.469202  306185 certs.go:480] ignoring /home/jenkins/minikube-integration/21625-284583/.minikube/certs/286434_empty.pem, impossibly tiny 0 bytes
	I1003 18:36:18.469209  306185 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 18:36:18.469231  306185 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem (1082 bytes)
	I1003 18:36:18.469252  306185 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem (1123 bytes)
	I1003 18:36:18.469272  306185 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/key.pem (1675 bytes)
	I1003 18:36:18.469312  306185 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem (1708 bytes)
	I1003 18:36:18.469863  306185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 18:36:18.488810  306185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1003 18:36:18.506567  306185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 18:36:18.523751  306185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1003 18:36:18.540943  306185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/functional-680560/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1003 18:36:18.558455  306185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/functional-680560/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1003 18:36:18.576340  306185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/functional-680560/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 18:36:18.594274  306185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/functional-680560/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1003 18:36:18.611795  306185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/certs/286434.pem --> /usr/share/ca-certificates/286434.pem (1338 bytes)
	I1003 18:36:18.629530  306185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem --> /usr/share/ca-certificates/2864342.pem (1708 bytes)
	I1003 18:36:18.646872  306185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 18:36:18.664081  306185 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1003 18:36:18.676626  306185 ssh_runner.go:195] Run: openssl version
	I1003 18:36:18.683188  306185 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/286434.pem && ln -fs /usr/share/ca-certificates/286434.pem /etc/ssl/certs/286434.pem"
	I1003 18:36:18.691031  306185 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/286434.pem
	I1003 18:36:18.694621  306185 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  3 18:34 /usr/share/ca-certificates/286434.pem
	I1003 18:36:18.694676  306185 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/286434.pem
	I1003 18:36:18.735522  306185 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/286434.pem /etc/ssl/certs/51391683.0"
	I1003 18:36:18.743354  306185 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2864342.pem && ln -fs /usr/share/ca-certificates/2864342.pem /etc/ssl/certs/2864342.pem"
	I1003 18:36:18.751479  306185 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2864342.pem
	I1003 18:36:18.754993  306185 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  3 18:34 /usr/share/ca-certificates/2864342.pem
	I1003 18:36:18.755048  306185 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2864342.pem
	I1003 18:36:18.796086  306185 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2864342.pem /etc/ssl/certs/3ec20f2e.0"
	I1003 18:36:18.803934  306185 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 18:36:18.812263  306185 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:36:18.815946  306185 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  3 18:27 /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:36:18.816039  306185 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 18:36:18.857030  306185 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1003 18:36:18.864795  306185 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 18:36:18.868404  306185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1003 18:36:18.908988  306185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1003 18:36:18.949718  306185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1003 18:36:18.991967  306185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1003 18:36:19.034075  306185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1003 18:36:19.074997  306185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
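	The openssl `-checkend 86400` runs above verify that each control-plane certificate remains valid for at least another 24 hours before being reused. A minimal sketch of the same check using Go's crypto/x509 (illustrative only, not minikube's implementation):

	// Minimal sketch (not minikube code): the 24-hour validity check that
	// `openssl x509 -checkend 86400` performs, done with crypto/x509.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	func main() {
		// Path taken from the log above; any PEM-encoded certificate works.
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatal("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("certificate expires within 24h; it would need to be regenerated")
		} else {
			fmt.Println("certificate is valid for at least another 24h")
		}
	}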
	I1003 18:36:19.121155  306185 kubeadm.go:400] StartCluster: {Name:functional-680560 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-680560 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:36:19.121235  306185 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1003 18:36:19.121305  306185 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1003 18:36:19.161979  306185 cri.go:89] found id: "50b3ff83dc12a578e128875d8d905f670ea595c131bc422ec1e99107f068a890"
	I1003 18:36:19.161994  306185 cri.go:89] found id: "a4978a363dcb465740da6e619c39a1e3fefed7177a13113e490cb551bb32deb4"
	I1003 18:36:19.161998  306185 cri.go:89] found id: "1ca956066bcf54c766db4a8f72b89a570865a5909f48187926d6f01f529de041"
	I1003 18:36:19.162001  306185 cri.go:89] found id: "4d61a53139c246e76a1e6a68ae4a813b1ef1326fc119f299bdd77293d19fd165"
	I1003 18:36:19.162003  306185 cri.go:89] found id: "b2024c39132e2a947179204d2ea1e577fd134d7328fafd5507a92accf165bb67"
	I1003 18:36:19.162006  306185 cri.go:89] found id: "27942fcd1ecf9ded8f4b3f0a3d6749b506c537d3159432749cccb976422b29a1"
	I1003 18:36:19.162008  306185 cri.go:89] found id: "0deaa54466556ec75b1b0aba336bbeee2273a75f291beb48c2f7f5b046d88e2f"
	I1003 18:36:19.162010  306185 cri.go:89] found id: "c83aa27892436f1efa5c919e3729d68a80db430d03da0b85f90fdf2314dc16a6"
	I1003 18:36:19.162012  306185 cri.go:89] found id: "1e35536d2cba6e752975c518052e82a4bd46361a2e29fa209092618ba16218ac"
	I1003 18:36:19.162019  306185 cri.go:89] found id: "35e3494c288d12b1e997c27f7a4e3c6915528be3c50a1a5a543216e116337415"
	I1003 18:36:19.162021  306185 cri.go:89] found id: "449950078c6d65435b961e8107fb4d6d1aa4f934772fcd3bf24e6351ae034a45"
	I1003 18:36:19.162033  306185 cri.go:89] found id: "6987c118bde34d2226de0a1d81d74c6e0ec9e47490d9f9d6e1e495845957370c"
	I1003 18:36:19.162036  306185 cri.go:89] found id: "ca2bf0a5109e8c3e3884c8fdaa47fca0438eb5c65fab58be8a266926c37d8a4e"
	I1003 18:36:19.162038  306185 cri.go:89] found id: "a3227b4d444d38e6fc474ba2e5c12ec7d8da83c446b7f139ef7d53e02fcc6c12"
	I1003 18:36:19.162040  306185 cri.go:89] found id: ""
	I1003 18:36:19.162090  306185 ssh_runner.go:195] Run: sudo runc list -f json
	W1003 18:36:19.176037  306185 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-03T18:36:19Z" level=error msg="open /run/runc: no such file or directory"
	I1003 18:36:19.176116  306185 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 18:36:19.186137  306185 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1003 18:36:19.186157  306185 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1003 18:36:19.186208  306185 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1003 18:36:19.194343  306185 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1003 18:36:19.194854  306185 kubeconfig.go:125] found "functional-680560" server: "https://192.168.49.2:8441"
	I1003 18:36:19.196103  306185 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1003 18:36:19.205619  306185 kubeadm.go:644] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-10-03 18:34:15.114258669 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-10-03 18:36:18.306319847 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
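	The drift check at kubeadm.go:644 is simply a comparison of the existing /var/tmp/minikube/kubeadm.yaml against the newly rendered kubeadm.yaml.new; any difference (here the changed enable-admission-plugins value) triggers a cluster reconfiguration. A minimal sketch of that comparison (illustrative only, not minikube's code):

	// Minimal sketch (not minikube code): detect kubeadm config drift by
	// comparing the current and newly rendered config files, mirroring the
	// `diff -u` run shown above.
	package main

	import (
		"bytes"
		"fmt"
		"log"
		"os"
	)

	func main() {
		oldCfg, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
		if err != nil {
			log.Fatal(err)
		}
		newCfg, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			log.Fatal(err)
		}
		if bytes.Equal(oldCfg, newCfg) {
			fmt.Println("no drift: existing control-plane config can be reused")
			return
		}
		fmt.Println("drift detected: cluster will be reconfigured from the new config")
	}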
	I1003 18:36:19.205638  306185 kubeadm.go:1160] stopping kube-system containers ...
	I1003 18:36:19.205649  306185 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1003 18:36:19.205704  306185 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1003 18:36:19.240690  306185 cri.go:89] found id: "50b3ff83dc12a578e128875d8d905f670ea595c131bc422ec1e99107f068a890"
	I1003 18:36:19.240701  306185 cri.go:89] found id: "a4978a363dcb465740da6e619c39a1e3fefed7177a13113e490cb551bb32deb4"
	I1003 18:36:19.240705  306185 cri.go:89] found id: "1ca956066bcf54c766db4a8f72b89a570865a5909f48187926d6f01f529de041"
	I1003 18:36:19.240708  306185 cri.go:89] found id: "4d61a53139c246e76a1e6a68ae4a813b1ef1326fc119f299bdd77293d19fd165"
	I1003 18:36:19.240711  306185 cri.go:89] found id: "b2024c39132e2a947179204d2ea1e577fd134d7328fafd5507a92accf165bb67"
	I1003 18:36:19.240715  306185 cri.go:89] found id: "27942fcd1ecf9ded8f4b3f0a3d6749b506c537d3159432749cccb976422b29a1"
	I1003 18:36:19.240719  306185 cri.go:89] found id: "0deaa54466556ec75b1b0aba336bbeee2273a75f291beb48c2f7f5b046d88e2f"
	I1003 18:36:19.240740  306185 cri.go:89] found id: "c83aa27892436f1efa5c919e3729d68a80db430d03da0b85f90fdf2314dc16a6"
	I1003 18:36:19.240743  306185 cri.go:89] found id: "35e3494c288d12b1e997c27f7a4e3c6915528be3c50a1a5a543216e116337415"
	I1003 18:36:19.240749  306185 cri.go:89] found id: ""
	I1003 18:36:19.240754  306185 cri.go:252] Stopping containers: [50b3ff83dc12a578e128875d8d905f670ea595c131bc422ec1e99107f068a890 a4978a363dcb465740da6e619c39a1e3fefed7177a13113e490cb551bb32deb4 1ca956066bcf54c766db4a8f72b89a570865a5909f48187926d6f01f529de041 4d61a53139c246e76a1e6a68ae4a813b1ef1326fc119f299bdd77293d19fd165 b2024c39132e2a947179204d2ea1e577fd134d7328fafd5507a92accf165bb67 27942fcd1ecf9ded8f4b3f0a3d6749b506c537d3159432749cccb976422b29a1 0deaa54466556ec75b1b0aba336bbeee2273a75f291beb48c2f7f5b046d88e2f c83aa27892436f1efa5c919e3729d68a80db430d03da0b85f90fdf2314dc16a6 35e3494c288d12b1e997c27f7a4e3c6915528be3c50a1a5a543216e116337415]
	I1003 18:36:19.240821  306185 ssh_runner.go:195] Run: which crictl
	I1003 18:36:19.244827  306185 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl stop --timeout=10 50b3ff83dc12a578e128875d8d905f670ea595c131bc422ec1e99107f068a890 a4978a363dcb465740da6e619c39a1e3fefed7177a13113e490cb551bb32deb4 1ca956066bcf54c766db4a8f72b89a570865a5909f48187926d6f01f529de041 4d61a53139c246e76a1e6a68ae4a813b1ef1326fc119f299bdd77293d19fd165 b2024c39132e2a947179204d2ea1e577fd134d7328fafd5507a92accf165bb67 27942fcd1ecf9ded8f4b3f0a3d6749b506c537d3159432749cccb976422b29a1 0deaa54466556ec75b1b0aba336bbeee2273a75f291beb48c2f7f5b046d88e2f c83aa27892436f1efa5c919e3729d68a80db430d03da0b85f90fdf2314dc16a6 35e3494c288d12b1e997c27f7a4e3c6915528be3c50a1a5a543216e116337415
	I1003 18:36:19.311468  306185 ssh_runner.go:195] Run: sudo systemctl stop kubelet
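	Before reconfiguring, every kube-system container is stopped through crictl using the same label filter shown above, and then kubelet itself is stopped. A minimal sketch that shells out to crictl the same way (illustrative only; the flags and sudo usage are taken from the log):

	// Minimal sketch (not minikube code): list kube-system containers via
	// crictl and stop them, mirroring the commands in the log above.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			log.Fatal(err)
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Println("no kube-system containers found")
			return
		}
		args := append([]string{"crictl", "stop", "--timeout=10"}, ids...)
		if err := exec.Command("sudo", args...).Run(); err != nil {
			log.Fatal(err)
		}
		fmt.Printf("stopped %d containers\n", len(ids))
	}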
	I1003 18:36:19.435665  306185 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 18:36:19.443696  306185 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5631 Oct  3 18:34 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Oct  3 18:34 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1972 Oct  3 18:34 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Oct  3 18:34 /etc/kubernetes/scheduler.conf
	
	I1003 18:36:19.443755  306185 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1003 18:36:19.452024  306185 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1003 18:36:19.460061  306185 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1003 18:36:19.460118  306185 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1003 18:36:19.467871  306185 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1003 18:36:19.475368  306185 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1003 18:36:19.475438  306185 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1003 18:36:19.483419  306185 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1003 18:36:19.492021  306185 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1003 18:36:19.492074  306185 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1003 18:36:19.499690  306185 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1003 18:36:19.507722  306185 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1003 18:36:19.556494  306185 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1003 18:36:22.105184  306185 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.548665111s)
	I1003 18:36:22.105242  306185 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1003 18:36:22.318941  306185 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1003 18:36:22.389617  306185 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1003 18:36:22.449480  306185 api_server.go:52] waiting for apiserver process to appear ...
	I1003 18:36:22.449549  306185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:36:22.949956  306185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:36:23.449911  306185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:36:23.463901  306185 api_server.go:72] duration metric: took 1.014430304s to wait for apiserver process to appear ...
	I1003 18:36:23.463916  306185 api_server.go:88] waiting for apiserver healthz status ...
	I1003 18:36:23.463934  306185 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1003 18:36:25.827278  306185 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1003 18:36:25.827292  306185 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1003 18:36:25.827306  306185 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1003 18:36:26.195553  306185 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1003 18:36:26.195567  306185 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1003 18:36:26.195578  306185 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1003 18:36:26.255158  306185 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1003 18:36:26.255179  306185 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1003 18:36:26.464531  306185 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1003 18:36:26.486636  306185 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1003 18:36:26.486658  306185 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1003 18:36:26.963976  306185 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1003 18:36:26.989012  306185 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1003 18:36:26.989033  306185 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1003 18:36:27.464568  306185 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1003 18:36:27.477416  306185 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1003 18:36:27.498088  306185 api_server.go:141] control plane version: v1.34.1
	I1003 18:36:27.498105  306185 api_server.go:131] duration metric: took 4.034183341s to wait for apiserver health ...
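	Between 18:36:23 and 18:36:27 the apiserver /healthz endpoint is polled roughly every 500ms; the 403 and 500 responses above are expected while RBAC bootstrap roles and other post-start hooks are still completing. A minimal sketch of such a polling loop (illustrative only; it skips TLS verification for brevity, whereas minikube trusts the cluster CA certificate):

	// Minimal sketch (not minikube code): poll the apiserver /healthz endpoint
	// until it returns HTTP 200, as the log above does.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.49.2:8441/healthz")
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("apiserver is healthy")
					return
				}
				fmt.Println("healthz returned", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("gave up waiting for /healthz")
	}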
	I1003 18:36:27.498121  306185 cni.go:84] Creating CNI manager for ""
	I1003 18:36:27.498127  306185 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 18:36:27.501485  306185 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1003 18:36:27.504485  306185 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1003 18:36:27.509324  306185 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1003 18:36:27.509335  306185 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1003 18:36:27.529848  306185 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1003 18:36:28.290937  306185 system_pods.go:43] waiting for kube-system pods to appear ...
	I1003 18:36:28.309442  306185 system_pods.go:59] 8 kube-system pods found
	I1003 18:36:28.309467  306185 system_pods.go:61] "coredns-66bc5c9577-zdpt7" [20510d73-6b6b-4f70-b559-5accf67ec7db] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1003 18:36:28.309475  306185 system_pods.go:61] "etcd-functional-680560" [7626ac4b-d4f3-4426-a876-9fd4ea823bfc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1003 18:36:28.309479  306185 system_pods.go:61] "kindnet-qdwjz" [6521394e-78d4-4199-8ca2-a9c550abe512] Running
	I1003 18:36:28.309485  306185 system_pods.go:61] "kube-apiserver-functional-680560" [18e731c2-a3ab-4c1b-9c96-fe5b7e384000] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1003 18:36:28.309495  306185 system_pods.go:61] "kube-controller-manager-functional-680560" [b992bb86-2e1c-4182-b240-f89d431b287f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1003 18:36:28.309501  306185 system_pods.go:61] "kube-proxy-h5pw4" [770f6e51-9c27-453d-99e6-9d38e9923917] Running
	I1003 18:36:28.309506  306185 system_pods.go:61] "kube-scheduler-functional-680560" [2c53847e-6486-4235-8f0b-86f44f86fbaf] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1003 18:36:28.309510  306185 system_pods.go:61] "storage-provisioner" [63ff9706-92fd-45b1-8a79-0b55a924642a] Running
	I1003 18:36:28.309515  306185 system_pods.go:74] duration metric: took 18.566826ms to wait for pod list to return data ...
	I1003 18:36:28.309521  306185 node_conditions.go:102] verifying NodePressure condition ...
	I1003 18:36:28.315643  306185 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1003 18:36:28.315661  306185 node_conditions.go:123] node cpu capacity is 2
	I1003 18:36:28.315671  306185 node_conditions.go:105] duration metric: took 6.146208ms to run NodePressure ...
	I1003 18:36:28.315742  306185 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1003 18:36:28.675972  306185 kubeadm.go:728] waiting for restarted kubelet to initialise ...
	I1003 18:36:28.684085  306185 kubeadm.go:743] kubelet initialised
	I1003 18:36:28.684095  306185 kubeadm.go:744] duration metric: took 8.111134ms waiting for restarted kubelet to initialise ...
	I1003 18:36:28.684109  306185 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1003 18:36:28.704110  306185 ops.go:34] apiserver oom_adj: -16
	I1003 18:36:28.704124  306185 kubeadm.go:601] duration metric: took 9.51796185s to restartPrimaryControlPlane
	I1003 18:36:28.704132  306185 kubeadm.go:402] duration metric: took 9.582986732s to StartCluster
	I1003 18:36:28.704148  306185 settings.go:142] acquiring lock: {Name:mkc95577dbc448e3409dfa2b5e53a3a1327cb451 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:36:28.704222  306185 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21625-284583/kubeconfig
	I1003 18:36:28.704927  306185 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/kubeconfig: {Name:mkc1323fd87f4a78231a26d2dab0dff7feecf1e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 18:36:28.705185  306185 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1003 18:36:28.705557  306185 config.go:182] Loaded profile config "functional-680560": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:36:28.705523  306185 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1003 18:36:28.705593  306185 addons.go:69] Setting storage-provisioner=true in profile "functional-680560"
	I1003 18:36:28.705606  306185 addons.go:238] Setting addon storage-provisioner=true in "functional-680560"
	I1003 18:36:28.705609  306185 addons.go:69] Setting default-storageclass=true in profile "functional-680560"
	W1003 18:36:28.705611  306185 addons.go:247] addon storage-provisioner should already be in state true
	I1003 18:36:28.705620  306185 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-680560"
	I1003 18:36:28.705638  306185 host.go:66] Checking if "functional-680560" exists ...
	I1003 18:36:28.705919  306185 cli_runner.go:164] Run: docker container inspect functional-680560 --format={{.State.Status}}
	I1003 18:36:28.706068  306185 cli_runner.go:164] Run: docker container inspect functional-680560 --format={{.State.Status}}
	I1003 18:36:28.710730  306185 out.go:179] * Verifying Kubernetes components...
	I1003 18:36:28.716895  306185 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 18:36:28.758641  306185 addons.go:238] Setting addon default-storageclass=true in "functional-680560"
	W1003 18:36:28.758651  306185 addons.go:247] addon default-storageclass should already be in state true
	I1003 18:36:28.758675  306185 host.go:66] Checking if "functional-680560" exists ...
	I1003 18:36:28.759093  306185 cli_runner.go:164] Run: docker container inspect functional-680560 --format={{.State.Status}}
	I1003 18:36:28.759665  306185 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 18:36:28.762579  306185 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:36:28.762588  306185 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1003 18:36:28.762652  306185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-680560
	I1003 18:36:28.787621  306185 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1003 18:36:28.787634  306185 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1003 18:36:28.787698  306185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-680560
	I1003 18:36:28.800033  306185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/functional-680560/id_rsa Username:docker}
	I1003 18:36:28.832909  306185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/functional-680560/id_rsa Username:docker}
	I1003 18:36:28.970477  306185 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 18:36:29.060498  306185 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 18:36:29.075469  306185 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1003 18:36:30.251175  306185 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.190648016s)
	I1003 18:36:30.251203  306185 node_ready.go:35] waiting up to 6m0s for node "functional-680560" to be "Ready" ...
	I1003 18:36:30.251381  306185 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.175898416s)
	I1003 18:36:30.251510  306185 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.281017261s)
	I1003 18:36:30.255456  306185 node_ready.go:49] node "functional-680560" is "Ready"
	I1003 18:36:30.255472  306185 node_ready.go:38] duration metric: took 4.247671ms for node "functional-680560" to be "Ready" ...
	I1003 18:36:30.255484  306185 api_server.go:52] waiting for apiserver process to appear ...
	I1003 18:36:30.255554  306185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:36:30.272652  306185 api_server.go:72] duration metric: took 1.567440607s to wait for apiserver process to appear ...
	I1003 18:36:30.272665  306185 api_server.go:88] waiting for apiserver healthz status ...
	I1003 18:36:30.272683  306185 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1003 18:36:30.279318  306185 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1003 18:36:30.282323  306185 addons.go:514] duration metric: took 1.57678464s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1003 18:36:30.287483  306185 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1003 18:36:30.288511  306185 api_server.go:141] control plane version: v1.34.1
	I1003 18:36:30.288523  306185 api_server.go:131] duration metric: took 15.852887ms to wait for apiserver health ...
	I1003 18:36:30.288530  306185 system_pods.go:43] waiting for kube-system pods to appear ...
	I1003 18:36:30.292557  306185 system_pods.go:59] 8 kube-system pods found
	I1003 18:36:30.292588  306185 system_pods.go:61] "coredns-66bc5c9577-zdpt7" [20510d73-6b6b-4f70-b559-5accf67ec7db] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1003 18:36:30.292597  306185 system_pods.go:61] "etcd-functional-680560" [7626ac4b-d4f3-4426-a876-9fd4ea823bfc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1003 18:36:30.292602  306185 system_pods.go:61] "kindnet-qdwjz" [6521394e-78d4-4199-8ca2-a9c550abe512] Running
	I1003 18:36:30.292609  306185 system_pods.go:61] "kube-apiserver-functional-680560" [18e731c2-a3ab-4c1b-9c96-fe5b7e384000] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1003 18:36:30.292614  306185 system_pods.go:61] "kube-controller-manager-functional-680560" [b992bb86-2e1c-4182-b240-f89d431b287f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1003 18:36:30.292618  306185 system_pods.go:61] "kube-proxy-h5pw4" [770f6e51-9c27-453d-99e6-9d38e9923917] Running
	I1003 18:36:30.292624  306185 system_pods.go:61] "kube-scheduler-functional-680560" [2c53847e-6486-4235-8f0b-86f44f86fbaf] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1003 18:36:30.292627  306185 system_pods.go:61] "storage-provisioner" [63ff9706-92fd-45b1-8a79-0b55a924642a] Running
	I1003 18:36:30.292631  306185 system_pods.go:74] duration metric: took 4.096782ms to wait for pod list to return data ...
	I1003 18:36:30.292637  306185 default_sa.go:34] waiting for default service account to be created ...
	I1003 18:36:30.295289  306185 default_sa.go:45] found service account: "default"
	I1003 18:36:30.295300  306185 default_sa.go:55] duration metric: took 2.658642ms for default service account to be created ...
	I1003 18:36:30.295308  306185 system_pods.go:116] waiting for k8s-apps to be running ...
	I1003 18:36:30.298613  306185 system_pods.go:86] 8 kube-system pods found
	I1003 18:36:30.298631  306185 system_pods.go:89] "coredns-66bc5c9577-zdpt7" [20510d73-6b6b-4f70-b559-5accf67ec7db] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1003 18:36:30.298638  306185 system_pods.go:89] "etcd-functional-680560" [7626ac4b-d4f3-4426-a876-9fd4ea823bfc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1003 18:36:30.298643  306185 system_pods.go:89] "kindnet-qdwjz" [6521394e-78d4-4199-8ca2-a9c550abe512] Running
	I1003 18:36:30.298650  306185 system_pods.go:89] "kube-apiserver-functional-680560" [18e731c2-a3ab-4c1b-9c96-fe5b7e384000] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1003 18:36:30.298655  306185 system_pods.go:89] "kube-controller-manager-functional-680560" [b992bb86-2e1c-4182-b240-f89d431b287f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1003 18:36:30.298658  306185 system_pods.go:89] "kube-proxy-h5pw4" [770f6e51-9c27-453d-99e6-9d38e9923917] Running
	I1003 18:36:30.298663  306185 system_pods.go:89] "kube-scheduler-functional-680560" [2c53847e-6486-4235-8f0b-86f44f86fbaf] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1003 18:36:30.298666  306185 system_pods.go:89] "storage-provisioner" [63ff9706-92fd-45b1-8a79-0b55a924642a] Running
	I1003 18:36:30.298672  306185 system_pods.go:126] duration metric: took 3.359669ms to wait for k8s-apps to be running ...
	I1003 18:36:30.298679  306185 system_svc.go:44] waiting for kubelet service to be running ....
	I1003 18:36:30.298744  306185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 18:36:30.312972  306185 system_svc.go:56] duration metric: took 14.270159ms WaitForService to wait for kubelet
	I1003 18:36:30.312990  306185 kubeadm.go:586] duration metric: took 1.607784628s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 18:36:30.313006  306185 node_conditions.go:102] verifying NodePressure condition ...
	I1003 18:36:30.315947  306185 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1003 18:36:30.315974  306185 node_conditions.go:123] node cpu capacity is 2
	I1003 18:36:30.315984  306185 node_conditions.go:105] duration metric: took 2.973105ms to run NodePressure ...
	I1003 18:36:30.315995  306185 start.go:241] waiting for startup goroutines ...
	I1003 18:36:30.316001  306185 start.go:246] waiting for cluster config update ...
	I1003 18:36:30.316011  306185 start.go:255] writing updated cluster config ...
	I1003 18:36:30.316325  306185 ssh_runner.go:195] Run: rm -f paused
	I1003 18:36:30.323099  306185 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1003 18:36:30.328227  306185 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-zdpt7" in "kube-system" namespace to be "Ready" or be gone ...
	W1003 18:36:32.334092  306185 pod_ready.go:104] pod "coredns-66bc5c9577-zdpt7" is not "Ready", error: <nil>
	W1003 18:36:34.334721  306185 pod_ready.go:104] pod "coredns-66bc5c9577-zdpt7" is not "Ready", error: <nil>
	I1003 18:36:36.334501  306185 pod_ready.go:94] pod "coredns-66bc5c9577-zdpt7" is "Ready"
	I1003 18:36:36.334514  306185 pod_ready.go:86] duration metric: took 6.006273876s for pod "coredns-66bc5c9577-zdpt7" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 18:36:36.338912  306185 pod_ready.go:83] waiting for pod "etcd-functional-680560" in "kube-system" namespace to be "Ready" or be gone ...
	W1003 18:36:38.344138  306185 pod_ready.go:104] pod "etcd-functional-680560" is not "Ready", error: <nil>
	I1003 18:36:39.844816  306185 pod_ready.go:94] pod "etcd-functional-680560" is "Ready"
	I1003 18:36:39.844830  306185 pod_ready.go:86] duration metric: took 3.505906536s for pod "etcd-functional-680560" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 18:36:39.847255  306185 pod_ready.go:83] waiting for pod "kube-apiserver-functional-680560" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 18:36:39.851831  306185 pod_ready.go:94] pod "kube-apiserver-functional-680560" is "Ready"
	I1003 18:36:39.851845  306185 pod_ready.go:86] duration metric: took 4.57783ms for pod "kube-apiserver-functional-680560" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 18:36:39.854191  306185 pod_ready.go:83] waiting for pod "kube-controller-manager-functional-680560" in "kube-system" namespace to be "Ready" or be gone ...
	W1003 18:36:41.859462  306185 pod_ready.go:104] pod "kube-controller-manager-functional-680560" is not "Ready", error: <nil>
	I1003 18:36:42.359712  306185 pod_ready.go:94] pod "kube-controller-manager-functional-680560" is "Ready"
	I1003 18:36:42.359759  306185 pod_ready.go:86] duration metric: took 2.505557052s for pod "kube-controller-manager-functional-680560" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 18:36:42.361694  306185 pod_ready.go:83] waiting for pod "kube-proxy-h5pw4" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 18:36:42.365763  306185 pod_ready.go:94] pod "kube-proxy-h5pw4" is "Ready"
	I1003 18:36:42.365774  306185 pod_ready.go:86] duration metric: took 4.068957ms for pod "kube-proxy-h5pw4" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 18:36:42.367852  306185 pod_ready.go:83] waiting for pod "kube-scheduler-functional-680560" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 18:36:42.643036  306185 pod_ready.go:94] pod "kube-scheduler-functional-680560" is "Ready"
	I1003 18:36:42.643049  306185 pod_ready.go:86] duration metric: took 275.186213ms for pod "kube-scheduler-functional-680560" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 18:36:42.643059  306185 pod_ready.go:40] duration metric: took 12.319937681s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1003 18:36:42.708629  306185 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1003 18:36:42.713941  306185 out.go:179] * Done! kubectl is now configured to use "functional-680560" cluster and "default" namespace by default
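The pod_ready waits logged above can be reproduced by hand against the same profile. A minimal sketch, assuming the functional-680560 kubectl context is still available and reusing the label selectors and the 4m budget shown in the log:

    kubectl --context functional-680560 -n kube-system get pods
    kubectl --context functional-680560 -n kube-system wait pod \
      -l k8s-app=kube-dns --for=condition=Ready --timeout=4m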
	
	
	==> CRI-O <==
	Oct 03 18:37:18 functional-680560 crio[3689]: time="2025-10-03T18:37:18.528945637Z" level=info msg="Got pod network &{Name:hello-node-75c85bcc94-p8dhc Namespace:default ID:fea2f39b510e4d925c95464ffc1a1c07234bc8fb14d1cca713f4128a203b6788 UID:d42e3f3a-befa-44cc-a3b5-2d24a9a9d591 NetNS:/var/run/netns/1df47eaf-56a3-4eb2-9530-57b9d954e903 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400049a6f8}] Aliases:map[]}"
	Oct 03 18:37:18 functional-680560 crio[3689]: time="2025-10-03T18:37:18.529101039Z" level=info msg="Checking pod default_hello-node-75c85bcc94-p8dhc for CNI network kindnet (type=ptp)"
	Oct 03 18:37:18 functional-680560 crio[3689]: time="2025-10-03T18:37:18.532574274Z" level=info msg="Ran pod sandbox fea2f39b510e4d925c95464ffc1a1c07234bc8fb14d1cca713f4128a203b6788 with infra container: default/hello-node-75c85bcc94-p8dhc/POD" id=e5874a23-d385-499b-abcd-91bcd33ac9b5 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 03 18:37:18 functional-680560 crio[3689]: time="2025-10-03T18:37:18.535848225Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=94d8c496-ef3d-490c-89f2-8700d68007c6 name=/runtime.v1.ImageService/PullImage
	Oct 03 18:37:22 functional-680560 crio[3689]: time="2025-10-03T18:37:22.453395218Z" level=info msg="Stopping pod sandbox: b9f36adf6854eb3a3a0da9015d2e8fa9599213f5fce866e417201595cb94f694" id=b51f1264-a7c8-4fd3-9a21-24c472cdb5ee name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 03 18:37:22 functional-680560 crio[3689]: time="2025-10-03T18:37:22.453453082Z" level=info msg="Stopped pod sandbox (already stopped): b9f36adf6854eb3a3a0da9015d2e8fa9599213f5fce866e417201595cb94f694" id=b51f1264-a7c8-4fd3-9a21-24c472cdb5ee name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 03 18:37:22 functional-680560 crio[3689]: time="2025-10-03T18:37:22.454257217Z" level=info msg="Removing pod sandbox: b9f36adf6854eb3a3a0da9015d2e8fa9599213f5fce866e417201595cb94f694" id=ae38a5ea-fb06-4762-ba4a-a90be158f42a name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 03 18:37:22 functional-680560 crio[3689]: time="2025-10-03T18:37:22.457832001Z" level=info msg="Removed pod sandbox: b9f36adf6854eb3a3a0da9015d2e8fa9599213f5fce866e417201595cb94f694" id=ae38a5ea-fb06-4762-ba4a-a90be158f42a name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 03 18:37:22 functional-680560 crio[3689]: time="2025-10-03T18:37:22.458469805Z" level=info msg="Stopping pod sandbox: 949148d7761a70eeac9fd935f7dca64f23dceff084c7438deb03bedb33054158" id=d1bf9293-ca78-454b-bb09-204ad065493e name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 03 18:37:22 functional-680560 crio[3689]: time="2025-10-03T18:37:22.458524986Z" level=info msg="Stopped pod sandbox (already stopped): 949148d7761a70eeac9fd935f7dca64f23dceff084c7438deb03bedb33054158" id=d1bf9293-ca78-454b-bb09-204ad065493e name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 03 18:37:22 functional-680560 crio[3689]: time="2025-10-03T18:37:22.458924743Z" level=info msg="Removing pod sandbox: 949148d7761a70eeac9fd935f7dca64f23dceff084c7438deb03bedb33054158" id=96583c88-5dca-4b63-9ec8-edb86fe4bd4c name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 03 18:37:22 functional-680560 crio[3689]: time="2025-10-03T18:37:22.462349238Z" level=info msg="Removed pod sandbox: 949148d7761a70eeac9fd935f7dca64f23dceff084c7438deb03bedb33054158" id=96583c88-5dca-4b63-9ec8-edb86fe4bd4c name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 03 18:37:22 functional-680560 crio[3689]: time="2025-10-03T18:37:22.462887453Z" level=info msg="Stopping pod sandbox: cb7b1f27388ac19ccaa92d449a4504049d79b5e4996bdb906ed3fc6c1fdb7543" id=8ee69624-add8-49d2-afae-037577fff3d0 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 03 18:37:22 functional-680560 crio[3689]: time="2025-10-03T18:37:22.462937834Z" level=info msg="Stopped pod sandbox (already stopped): cb7b1f27388ac19ccaa92d449a4504049d79b5e4996bdb906ed3fc6c1fdb7543" id=8ee69624-add8-49d2-afae-037577fff3d0 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 03 18:37:22 functional-680560 crio[3689]: time="2025-10-03T18:37:22.463261774Z" level=info msg="Removing pod sandbox: cb7b1f27388ac19ccaa92d449a4504049d79b5e4996bdb906ed3fc6c1fdb7543" id=1cc72bf2-ef0b-47fa-b4e1-3dd2b0955e1c name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 03 18:37:22 functional-680560 crio[3689]: time="2025-10-03T18:37:22.466619821Z" level=info msg="Removed pod sandbox: cb7b1f27388ac19ccaa92d449a4504049d79b5e4996bdb906ed3fc6c1fdb7543" id=1cc72bf2-ef0b-47fa-b4e1-3dd2b0955e1c name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 03 18:37:30 functional-680560 crio[3689]: time="2025-10-03T18:37:30.500596712Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=df537952-1bc1-4d7d-b42f-362a963e1c69 name=/runtime.v1.ImageService/PullImage
	Oct 03 18:37:47 functional-680560 crio[3689]: time="2025-10-03T18:37:47.501289028Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=a7bb5675-43ed-4680-9f01-faf71dd32af6 name=/runtime.v1.ImageService/PullImage
	Oct 03 18:37:57 functional-680560 crio[3689]: time="2025-10-03T18:37:57.501313094Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=677b1675-6ddd-4ea7-a8a9-a0257e8c5614 name=/runtime.v1.ImageService/PullImage
	Oct 03 18:38:39 functional-680560 crio[3689]: time="2025-10-03T18:38:39.501082924Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=82c185c6-bd41-4b99-b139-01811ecc75c4 name=/runtime.v1.ImageService/PullImage
	Oct 03 18:38:49 functional-680560 crio[3689]: time="2025-10-03T18:38:49.500799022Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=e6ace975-d9a9-4655-9f4e-45db8e9b0065 name=/runtime.v1.ImageService/PullImage
	Oct 03 18:40:08 functional-680560 crio[3689]: time="2025-10-03T18:40:08.500703601Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=1558b0b6-1187-4d13-8e79-c8fe80df3721 name=/runtime.v1.ImageService/PullImage
	Oct 03 18:40:10 functional-680560 crio[3689]: time="2025-10-03T18:40:10.500686046Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=e464e19b-74e3-4817-b98c-f71265a83616 name=/runtime.v1.ImageService/PullImage
	Oct 03 18:42:49 functional-680560 crio[3689]: time="2025-10-03T18:42:49.501008884Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=a9dec329-46d1-45b1-9df7-2b0759d6aa95 name=/runtime.v1.ImageService/PullImage
	Oct 03 18:42:54 functional-680560 crio[3689]: time="2025-10-03T18:42:54.501120474Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=1c530738-ca9f-4d1d-9f91-c642bd6d029b name=/runtime.v1.ImageService/PullImage
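The repeated "Pulling image: kicbase/echo-server:latest" entries above never complete; the kubelet section later in this report shows the matching ImagePullBackOff ("short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list"). A sketch of two possible remediations, offered as assumptions rather than anything the test harness actually does:

    # 1. Fully qualify the image so CRI-O never has to resolve a short name
    #    (the container name "echo-server" is taken from the kubelet error):
    kubectl --context functional-680560 set image deployment/hello-node \
      echo-server=docker.io/kicbase/echo-server:latest

    # 2. Or relax short-name handling on the node, per containers-registries.conf(5):
    #    /etc/containers/registries.conf
    #      unqualified-search-registries = ["docker.io"]
    #      short-name-mode = "permissive"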
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                             CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	38951e9d64fda       docker.io/library/nginx@sha256:e041cf856a0f3790b5ef37a966f43d872fba48fcf4405fd3e8a28ac5f7436992   9 minutes ago       Running             myfrontend                0                   697dc4fe74d56       sp-pod                                      default
	f8a9d9de634a5       docker.io/library/nginx@sha256:77d740efa8f9c4753f2a7212d8422b8c77411681971f400ea03d07fe38476cac   10 minutes ago      Running             nginx                     0                   1201e9a2056e6       nginx-svc                                   default
	65e948f22ba74       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  10 minutes ago      Running             coredns                   3                   686c2f1d1d2c0       coredns-66bc5c9577-zdpt7                    kube-system
	fe8ef9d65e664       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                  10 minutes ago      Running             kube-proxy                3                   a8e2796b07a32       kube-proxy-h5pw4                            kube-system
	144d09669b189       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  10 minutes ago      Running             storage-provisioner       3                   0d137a1443703       storage-provisioner                         kube-system
	30a2e22422b45       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  10 minutes ago      Running             kindnet-cni               3                   fc24f6c0e46b7       kindnet-qdwjz                               kube-system
	5378edc23a100       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                  10 minutes ago      Running             kube-apiserver            0                   1417d904bc25c       kube-apiserver-functional-680560            kube-system
	6bdfc189c7e3c       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                  10 minutes ago      Running             kube-scheduler            3                   04ee3c6ef4296       kube-scheduler-functional-680560            kube-system
	a3c82a50d32ea       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                  10 minutes ago      Running             etcd                      3                   f9367ff916862       etcd-functional-680560                      kube-system
	269f6a3f8b162       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                  10 minutes ago      Running             kube-controller-manager   3                   ac20a1985e2a6       kube-controller-manager-functional-680560   kube-system
	50b3ff83dc12a       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  11 minutes ago      Exited              coredns                   2                   686c2f1d1d2c0       coredns-66bc5c9577-zdpt7                    kube-system
	a4978a363dcb4       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  11 minutes ago      Exited              storage-provisioner       2                   0d137a1443703       storage-provisioner                         kube-system
	1ca956066bcf5       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  11 minutes ago      Exited              kindnet-cni               2                   fc24f6c0e46b7       kindnet-qdwjz                               kube-system
	4d61a53139c24       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                  11 minutes ago      Exited              kube-proxy                2                   a8e2796b07a32       kube-proxy-h5pw4                            kube-system
	b2024c39132e2       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                  11 minutes ago      Exited              etcd                      2                   f9367ff916862       etcd-functional-680560                      kube-system
	27942fcd1ecf9       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                  11 minutes ago      Exited              kube-controller-manager   2                   ac20a1985e2a6       kube-controller-manager-functional-680560   kube-system
	c83aa27892436       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                  11 minutes ago      Exited              kube-scheduler            2                   04ee3c6ef4296       kube-scheduler-functional-680560            kube-system
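The listing above reflects CRI container and sandbox state on the node; it can be regenerated over the profile's SSH session (a sketch, assuming the node is still running):

    minikube -p functional-680560 ssh -- sudo crictl ps -a
    minikube -p functional-680560 ssh -- sudo crictl pods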
	
	
	==> coredns [50b3ff83dc12a578e128875d8d905f670ea595c131bc422ec1e99107f068a890] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38833 - 16737 "HINFO IN 1043855371899309195.1894069568978091108. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.02435854s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [65e948f22ba74e8c0af3736a8eedb3ddeaf6a8da0a98bd3eaf9f3b1c23eaea05] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:57827 - 27015 "HINFO IN 1434626049095952566.5064121711509352308. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014832565s
	
	
	==> describe nodes <==
	Name:               functional-680560
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-680560
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a43873c79fc22f8b1ccd29d3dfa635d392b09335
	                    minikube.k8s.io/name=functional-680560
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_03T18_34_30_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 03 Oct 2025 18:34:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-680560
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 03 Oct 2025 18:46:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 03 Oct 2025 18:45:56 +0000   Fri, 03 Oct 2025 18:34:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 03 Oct 2025 18:45:56 +0000   Fri, 03 Oct 2025 18:34:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 03 Oct 2025 18:45:56 +0000   Fri, 03 Oct 2025 18:34:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 03 Oct 2025 18:45:56 +0000   Fri, 03 Oct 2025 18:35:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-680560
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 3ed8a729a7f1474da4a5e47715c3d907
	  System UUID:                f62f3cae-8530-4a14-ba4d-9bce39e0fd6f
	  Boot ID:                    3762136e-8bec-4104-a5cb-0b1976f6048e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-p8dhc                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m45s
	  default                     hello-node-connect-7d85dfc575-9r2qn          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m53s
	  kube-system                 coredns-66bc5c9577-zdpt7                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-functional-680560                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-qdwjz                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-680560             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-680560    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-h5pw4                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-680560             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node functional-680560 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node functional-680560 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m                kubelet          Node functional-680560 status is now: NodeHasSufficientPID
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           12m                node-controller  Node functional-680560 event: Registered Node functional-680560 in Controller
	  Normal   NodeReady                11m                kubelet          Node functional-680560 status is now: NodeReady
	  Warning  ContainerGCFailed        11m                kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           11m                node-controller  Node functional-680560 event: Registered Node functional-680560 in Controller
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-680560 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-680560 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-680560 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node functional-680560 event: Registered Node functional-680560 in Controller
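The node description can be regenerated directly, and the "Allocated resources" percentages are simply requests over allocatable (850m of the 2000m allocatable CPU ≈ 42%). A small sketch using the same context:

    kubectl --context functional-680560 describe node functional-680560
    kubectl --context functional-680560 get node functional-680560 \
      -o jsonpath='{.status.allocatable.cpu}{"\n"}'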
	
	
	==> dmesg <==
	[Oct 3 17:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.016734] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.507620] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.057770] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.764958] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.639190] kauditd_printk_skb: 36 callbacks suppressed
	[Oct 3 18:16] hrtimer: interrupt took 33359751 ns
	[Oct 3 18:26] kauditd_printk_skb: 8 callbacks suppressed
	[Oct 3 18:27] overlayfs: idmapped layers are currently not supported
	[  +0.053491] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Oct 3 18:33] overlayfs: idmapped layers are currently not supported
	[Oct 3 18:34] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [a3c82a50d32ea61098751b1121ec510b20bbef5da38a37035919555aa528a040] <==
	{"level":"warn","ts":"2025-10-03T18:36:24.444900Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T18:36:24.446720Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T18:36:24.470209Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T18:36:24.487327Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T18:36:24.529656Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T18:36:24.531001Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T18:36:24.573763Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T18:36:24.605340Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T18:36:24.629367Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T18:36:24.642857Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T18:36:24.665603Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T18:36:24.682108Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T18:36:24.694182Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T18:36:24.712236Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T18:36:24.735467Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T18:36:24.752047Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T18:36:24.775120Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T18:36:24.829698Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T18:36:24.852447Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T18:36:24.876855Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T18:36:24.892615Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T18:36:24.964134Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43686","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-03T18:46:23.593004Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1124}
	{"level":"info","ts":"2025-10-03T18:46:23.615826Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1124,"took":"22.446619ms","hash":2651885340,"current-db-size-bytes":3321856,"current-db-size":"3.3 MB","current-db-size-in-use-bytes":1429504,"current-db-size-in-use":"1.4 MB"}
	{"level":"info","ts":"2025-10-03T18:46:23.615892Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2651885340,"revision":1124,"compact-revision":-1}
	
	
	==> etcd [b2024c39132e2a947179204d2ea1e577fd134d7328fafd5507a92accf165bb67] <==
	{"level":"warn","ts":"2025-10-03T18:35:42.966133Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T18:35:42.984351Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T18:35:42.999320Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T18:35:43.031514Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T18:35:43.046636Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T18:35:43.061988Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T18:35:43.131291Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46070","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-03T18:36:11.152959Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-03T18:36:11.153021Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-680560","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-03T18:36:11.153119Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-03T18:36:11.432055Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-03T18:36:11.432219Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-03T18:36:11.432286Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-10-03T18:36:11.432426Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-03T18:36:11.432471Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-03T18:36:11.432852Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-03T18:36:11.432912Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-03T18:36:11.432923Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-03T18:36:11.432977Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-03T18:36:11.432991Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-03T18:36:11.432998Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-03T18:36:11.436245Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-03T18:36:11.436326Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-03T18:36:11.436362Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-03T18:36:11.436375Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-680560","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 18:47:03 up  1:29,  0 user,  load average: 0.10, 0.42, 1.58
	Linux functional-680560 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1ca956066bcf54c766db4a8f72b89a570865a5909f48187926d6f01f529de041] <==
	I1003 18:35:40.294356       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1003 18:35:40.312990       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1003 18:35:40.313248       1 main.go:148] setting mtu 1500 for CNI 
	I1003 18:35:40.313294       1 main.go:178] kindnetd IP family: "ipv4"
	I1003 18:35:40.313666       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-03T18:35:40Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1003 18:35:40.512845       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1003 18:35:40.512975       1 controller.go:381] "Waiting for informer caches to sync"
	I1003 18:35:40.513011       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1003 18:35:40.513994       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1003 18:35:44.213944       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1003 18:35:44.213981       1 metrics.go:72] Registering metrics
	I1003 18:35:44.214051       1 controller.go:711] "Syncing nftables rules"
	I1003 18:35:50.509407       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1003 18:35:50.509482       1 main.go:301] handling current node
	I1003 18:36:00.505958       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1003 18:36:00.506006       1 main.go:301] handling current node
	I1003 18:36:10.515041       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1003 18:36:10.515083       1 main.go:301] handling current node
	
	
	==> kindnet [30a2e22422b452e8a44b779672c250163e6cd71f1a8907fc47c3f92b96645cfd] <==
	I1003 18:44:57.200124       1 main.go:301] handling current node
	I1003 18:45:07.196863       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1003 18:45:07.196906       1 main.go:301] handling current node
	I1003 18:45:17.193642       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1003 18:45:17.193678       1 main.go:301] handling current node
	I1003 18:45:27.200100       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1003 18:45:27.200211       1 main.go:301] handling current node
	I1003 18:45:37.194320       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1003 18:45:37.194356       1 main.go:301] handling current node
	I1003 18:45:47.193873       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1003 18:45:47.193911       1 main.go:301] handling current node
	I1003 18:45:57.193243       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1003 18:45:57.193277       1 main.go:301] handling current node
	I1003 18:46:07.193337       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1003 18:46:07.193370       1 main.go:301] handling current node
	I1003 18:46:17.196824       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1003 18:46:17.196858       1 main.go:301] handling current node
	I1003 18:46:27.193236       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1003 18:46:27.193269       1 main.go:301] handling current node
	I1003 18:46:37.193361       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1003 18:46:37.193472       1 main.go:301] handling current node
	I1003 18:46:47.193358       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1003 18:46:47.193395       1 main.go:301] handling current node
	I1003 18:46:57.194824       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1003 18:46:57.194857       1 main.go:301] handling current node
	
	
	==> kube-apiserver [5378edc23a100c16a454228792a0bae119291a0f70a8efb0fbd266f498326d86] <==
	I1003 18:36:26.293116       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1003 18:36:26.293121       1 cache.go:39] Caches are synced for autoregister controller
	I1003 18:36:26.293279       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1003 18:36:26.314284       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1003 18:36:26.316321       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1003 18:36:26.320369       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1003 18:36:26.321602       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1003 18:36:26.321616       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1003 18:36:26.328950       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1003 18:36:26.486962       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1003 18:36:26.784069       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1003 18:36:28.283151       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1003 18:36:28.503831       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1003 18:36:28.642155       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1003 18:36:28.662185       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1003 18:36:30.794784       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1003 18:36:31.127987       1 controller.go:667] quota admission added evaluator for: endpoints
	I1003 18:36:31.180761       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1003 18:36:46.167638       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.97.144.145"}
	I1003 18:36:52.363348       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.106.72.152"}
	I1003 18:37:01.108119       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.96.99.160"}
	E1003 18:37:10.138178       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:35644: use of closed network connection
	E1003 18:37:18.081736       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:35684: use of closed network connection
	I1003 18:37:18.281250       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.107.196.9"}
	I1003 18:46:26.224007       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [269f6a3f8b1625b26ed47ad66287b8ae74ab8f78ee2b4d3af59bb8b9b10dc2d3] <==
	I1003 18:36:30.821486       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1003 18:36:30.821742       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1003 18:36:30.822048       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1003 18:36:30.822075       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1003 18:36:30.822121       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1003 18:36:30.822212       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1003 18:36:30.822292       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1003 18:36:30.822354       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-680560"
	I1003 18:36:30.822400       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1003 18:36:30.823531       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1003 18:36:30.823912       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1003 18:36:30.823932       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1003 18:36:30.823939       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1003 18:36:30.824012       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1003 18:36:30.824670       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1003 18:36:30.826971       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1003 18:36:30.833002       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1003 18:36:30.833081       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1003 18:36:30.833115       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1003 18:36:30.833127       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1003 18:36:30.833133       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1003 18:36:30.840797       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1003 18:36:30.847088       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1003 18:36:30.850407       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1003 18:36:30.860686       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-controller-manager [27942fcd1ecf9ded8f4b3f0a3d6749b506c537d3159432749cccb976422b29a1] <==
	I1003 18:35:47.316244       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1003 18:35:47.318069       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1003 18:35:47.319333       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1003 18:35:47.320005       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1003 18:35:47.322017       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1003 18:35:47.323214       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1003 18:35:47.326437       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1003 18:35:47.328935       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1003 18:35:47.333845       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1003 18:35:47.337077       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1003 18:35:47.337263       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1003 18:35:47.339490       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1003 18:35:47.340411       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1003 18:35:47.343696       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1003 18:35:47.343793       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1003 18:35:47.345977       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1003 18:35:47.349122       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1003 18:35:47.351007       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1003 18:35:47.353324       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1003 18:35:47.356680       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1003 18:35:47.358965       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1003 18:35:47.361170       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1003 18:35:47.364656       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1003 18:35:47.364670       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1003 18:35:47.364688       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	
	
	==> kube-proxy [4d61a53139c246e76a1e6a68ae4a813b1ef1326fc119f299bdd77293d19fd165] <==
	I1003 18:35:42.512837       1 server_linux.go:53] "Using iptables proxy"
	I1003 18:35:42.978076       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1003 18:35:44.191026       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1003 18:35:44.191140       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1003 18:35:44.191282       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1003 18:35:44.476315       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1003 18:35:44.476443       1 server_linux.go:132] "Using iptables Proxier"
	I1003 18:35:44.532477       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1003 18:35:44.534390       1 server.go:527] "Version info" version="v1.34.1"
	I1003 18:35:44.558124       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1003 18:35:44.563893       1 config.go:106] "Starting endpoint slice config controller"
	I1003 18:35:44.563914       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1003 18:35:44.564257       1 config.go:200] "Starting service config controller"
	I1003 18:35:44.564265       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1003 18:35:44.564551       1 config.go:403] "Starting serviceCIDR config controller"
	I1003 18:35:44.564557       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1003 18:35:44.573693       1 config.go:309] "Starting node config controller"
	I1003 18:35:44.573713       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1003 18:35:44.573722       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1003 18:35:44.666898       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1003 18:35:44.666946       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1003 18:35:44.666985       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [fe8ef9d65e664a7d4da0dc52d6a7eed362c7fc8f63b1c089b8e2a6bc71c2e43f] <==
	I1003 18:36:27.347563       1 server_linux.go:53] "Using iptables proxy"
	I1003 18:36:28.233835       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1003 18:36:28.407409       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1003 18:36:28.407447       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1003 18:36:28.407544       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1003 18:36:29.800994       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1003 18:36:29.801062       1 server_linux.go:132] "Using iptables Proxier"
	I1003 18:36:29.832971       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1003 18:36:29.833297       1 server.go:527] "Version info" version="v1.34.1"
	I1003 18:36:29.833312       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1003 18:36:29.853454       1 config.go:200] "Starting service config controller"
	I1003 18:36:29.853475       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1003 18:36:29.853488       1 config.go:106] "Starting endpoint slice config controller"
	I1003 18:36:29.853493       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1003 18:36:29.853501       1 config.go:403] "Starting serviceCIDR config controller"
	I1003 18:36:29.853505       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1003 18:36:29.855224       1 config.go:309] "Starting node config controller"
	I1003 18:36:29.855232       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1003 18:36:29.855239       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1003 18:36:29.956911       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1003 18:36:29.956949       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1003 18:36:29.956962       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
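Both kube-proxy restarts log the same "Kube-proxy configuration may be incomplete or incorrect" warning, and the message itself names the suggested setting: --nodeport-addresses primary. A hedged sketch of the two forms; the config-file field name is an assumption based on kubeproxy.config.k8s.io/v1alpha1 and is not something this test changes:

    # Flag form, as suggested by the warning:
    kube-proxy --nodeport-addresses primary ...
    # Config-file form (assumed equivalent field in KubeProxyConfiguration):
    #   nodePortAddresses: ["primary"]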
	
	
	==> kube-scheduler [6bdfc189c7e3c1ffc539e214bed8854648b71ce518b31d9897db177cef4c1258] <==
	I1003 18:36:29.638289       1 serving.go:386] Generated self-signed cert in-memory
	I1003 18:36:31.393477       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1003 18:36:31.393510       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1003 18:36:31.398257       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1003 18:36:31.398356       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1003 18:36:31.398421       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1003 18:36:31.398455       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1003 18:36:31.398506       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1003 18:36:31.398536       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1003 18:36:31.398786       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1003 18:36:31.398910       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1003 18:36:31.499328       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1003 18:36:31.499402       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1003 18:36:31.499420       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [c83aa27892436f1efa5c919e3729d68a80db430d03da0b85f90fdf2314dc16a6] <==
	I1003 18:35:41.016241       1 serving.go:386] Generated self-signed cert in-memory
	W1003 18:35:43.918508       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1003 18:35:43.918547       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1003 18:35:43.918557       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1003 18:35:43.918565       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1003 18:35:44.034654       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1003 18:35:44.034696       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1003 18:35:44.038998       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1003 18:35:44.039034       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1003 18:35:44.039955       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1003 18:35:44.040224       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1003 18:35:44.144974       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1003 18:36:11.149835       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1003 18:36:11.149877       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1003 18:36:11.149903       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1003 18:36:11.149944       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1003 18:36:11.150620       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1003 18:36:11.150650       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 03 18:44:27 functional-680560 kubelet[4013]: E1003 18:44:27.500393    4013 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-p8dhc" podUID="d42e3f3a-befa-44cc-a3b5-2d24a9a9d591"
	Oct 03 18:44:32 functional-680560 kubelet[4013]: E1003 18:44:32.501373    4013 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-9r2qn" podUID="7c5e6d68-2db7-4a04-8a4a-83a11ad767d8"
	Oct 03 18:44:40 functional-680560 kubelet[4013]: E1003 18:44:40.500622    4013 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-p8dhc" podUID="d42e3f3a-befa-44cc-a3b5-2d24a9a9d591"
	Oct 03 18:44:45 functional-680560 kubelet[4013]: E1003 18:44:45.500208    4013 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-9r2qn" podUID="7c5e6d68-2db7-4a04-8a4a-83a11ad767d8"
	Oct 03 18:44:53 functional-680560 kubelet[4013]: E1003 18:44:53.500393    4013 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-p8dhc" podUID="d42e3f3a-befa-44cc-a3b5-2d24a9a9d591"
	Oct 03 18:44:57 functional-680560 kubelet[4013]: E1003 18:44:57.500186    4013 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-9r2qn" podUID="7c5e6d68-2db7-4a04-8a4a-83a11ad767d8"
	Oct 03 18:45:08 functional-680560 kubelet[4013]: E1003 18:45:08.500319    4013 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-p8dhc" podUID="d42e3f3a-befa-44cc-a3b5-2d24a9a9d591"
	Oct 03 18:45:09 functional-680560 kubelet[4013]: E1003 18:45:09.500449    4013 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-9r2qn" podUID="7c5e6d68-2db7-4a04-8a4a-83a11ad767d8"
	Oct 03 18:45:21 functional-680560 kubelet[4013]: E1003 18:45:21.500973    4013 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-p8dhc" podUID="d42e3f3a-befa-44cc-a3b5-2d24a9a9d591"
	Oct 03 18:45:24 functional-680560 kubelet[4013]: E1003 18:45:24.500793    4013 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-9r2qn" podUID="7c5e6d68-2db7-4a04-8a4a-83a11ad767d8"
	Oct 03 18:45:32 functional-680560 kubelet[4013]: E1003 18:45:32.502570    4013 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-p8dhc" podUID="d42e3f3a-befa-44cc-a3b5-2d24a9a9d591"
	Oct 03 18:45:39 functional-680560 kubelet[4013]: E1003 18:45:39.501008    4013 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-9r2qn" podUID="7c5e6d68-2db7-4a04-8a4a-83a11ad767d8"
	Oct 03 18:45:45 functional-680560 kubelet[4013]: E1003 18:45:45.499921    4013 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-p8dhc" podUID="d42e3f3a-befa-44cc-a3b5-2d24a9a9d591"
	Oct 03 18:45:50 functional-680560 kubelet[4013]: E1003 18:45:50.500967    4013 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-9r2qn" podUID="7c5e6d68-2db7-4a04-8a4a-83a11ad767d8"
	Oct 03 18:45:58 functional-680560 kubelet[4013]: E1003 18:45:58.500703    4013 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-p8dhc" podUID="d42e3f3a-befa-44cc-a3b5-2d24a9a9d591"
	Oct 03 18:46:01 functional-680560 kubelet[4013]: E1003 18:46:01.500535    4013 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-9r2qn" podUID="7c5e6d68-2db7-4a04-8a4a-83a11ad767d8"
	Oct 03 18:46:10 functional-680560 kubelet[4013]: E1003 18:46:10.500948    4013 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-p8dhc" podUID="d42e3f3a-befa-44cc-a3b5-2d24a9a9d591"
	Oct 03 18:46:13 functional-680560 kubelet[4013]: E1003 18:46:13.500991    4013 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-9r2qn" podUID="7c5e6d68-2db7-4a04-8a4a-83a11ad767d8"
	Oct 03 18:46:21 functional-680560 kubelet[4013]: E1003 18:46:21.499966    4013 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-p8dhc" podUID="d42e3f3a-befa-44cc-a3b5-2d24a9a9d591"
	Oct 03 18:46:25 functional-680560 kubelet[4013]: E1003 18:46:25.500322    4013 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-9r2qn" podUID="7c5e6d68-2db7-4a04-8a4a-83a11ad767d8"
	Oct 03 18:46:35 functional-680560 kubelet[4013]: E1003 18:46:35.500460    4013 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-p8dhc" podUID="d42e3f3a-befa-44cc-a3b5-2d24a9a9d591"
	Oct 03 18:46:36 functional-680560 kubelet[4013]: E1003 18:46:36.500681    4013 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-9r2qn" podUID="7c5e6d68-2db7-4a04-8a4a-83a11ad767d8"
	Oct 03 18:46:50 functional-680560 kubelet[4013]: E1003 18:46:50.500930    4013 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-p8dhc" podUID="d42e3f3a-befa-44cc-a3b5-2d24a9a9d591"
	Oct 03 18:46:51 functional-680560 kubelet[4013]: E1003 18:46:51.499994    4013 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-9r2qn" podUID="7c5e6d68-2db7-4a04-8a4a-83a11ad767d8"
	Oct 03 18:47:02 functional-680560 kubelet[4013]: E1003 18:47:02.500388    4013 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-p8dhc" podUID="d42e3f3a-befa-44cc-a3b5-2d24a9a9d591"
	
	
	==> storage-provisioner [144d09669b1897c7293e709e17f66b1af9111aa4f245e6990d7b957a7fa9b219] <==
	W1003 18:46:37.685880       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:46:39.688420       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:46:39.693819       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:46:41.697674       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:46:41.704230       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:46:43.707352       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:46:43.711874       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:46:45.714788       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:46:45.718838       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:46:47.722036       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:46:47.726094       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:46:49.728991       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:46:49.733408       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:46:51.737336       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:46:51.744039       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:46:53.747498       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:46:53.752562       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:46:55.756046       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:46:55.760300       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:46:57.762847       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:46:57.767230       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:46:59.770749       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:46:59.777327       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:47:01.788375       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:47:01.798720       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [a4978a363dcb465740da6e619c39a1e3fefed7177a13113e490cb551bb32deb4] <==
	I1003 18:35:42.385895       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1003 18:35:44.189797       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1003 18:35:44.194161       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1003 18:35:44.254929       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:35:47.710498       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:35:51.971133       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:35:55.570144       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:35:58.623889       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:36:01.646166       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:36:01.651344       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1003 18:36:01.651505       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1003 18:36:01.651679       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-680560_06d4c8e0-2f5c-459e-8a78-021684de5091!
	I1003 18:36:01.652293       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"dbcdd6d8-5ab2-4173-8221-dfba19c28e99", APIVersion:"v1", ResourceVersion:"560", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-680560_06d4c8e0-2f5c-459e-8a78-021684de5091 became leader
	W1003 18:36:01.657088       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:36:01.661844       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1003 18:36:01.752172       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-680560_06d4c8e0-2f5c-459e-8a78-021684de5091!
	W1003 18:36:03.665803       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:36:03.670998       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:36:05.674633       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:36:05.682587       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:36:07.688973       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:36:07.696109       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:36:09.699963       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 18:36:09.704978       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-680560 -n functional-680560
helpers_test.go:269: (dbg) Run:  kubectl --context functional-680560 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-node-75c85bcc94-p8dhc hello-node-connect-7d85dfc575-9r2qn
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-680560 describe pod hello-node-75c85bcc94-p8dhc hello-node-connect-7d85dfc575-9r2qn
helpers_test.go:290: (dbg) kubectl --context functional-680560 describe pod hello-node-75c85bcc94-p8dhc hello-node-connect-7d85dfc575-9r2qn:

                                                
                                                
-- stdout --
	Name:             hello-node-75c85bcc94-p8dhc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-680560/192.168.49.2
	Start Time:       Fri, 03 Oct 2025 18:37:18 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fkjqm (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-fkjqm:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m46s                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-p8dhc to functional-680560
	  Normal   Pulling    6m54s (x5 over 9m46s)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m54s (x5 over 9m46s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     6m54s (x5 over 9m46s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m36s (x21 over 9m46s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m36s (x21 over 9m46s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-9r2qn
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-680560/192.168.49.2
	Start Time:       Fri, 03 Oct 2025 18:37:00 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-g8dfh (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-g8dfh:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-9r2qn to functional-680560
	  Normal   Pulling    6m56s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m56s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     6m56s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Warning  Failed     4m53s (x20 over 10m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m39s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.49s)
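The kubelet events in the post-mortem above point at the actual root cause: CRI-O's short-name resolution is in enforcing mode, so the unqualified reference "kicbase/echo-server" resolves to an ambiguous candidate list and every pull ends in ErrImagePull/ImagePullBackOff. A minimal sketch of a short-name alias that would remove the ambiguity is below; the file path and the docker.io target are assumptions about this node's containers configuration, not something recorded in this run (relaxing short-name-mode to "permissive" in registries.conf would be the other common workaround):

	# hypothetical drop-in, e.g. /etc/containers/registries.conf.d/echo-server.conf inside the minikube node
	[aliases]
	"kicbase/echo-server" = "docker.io/kicbase/echo-server"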

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (600.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-680560 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-680560 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-p8dhc" [d42e3f3a-befa-44cc-a3b5-2d24a9a9d591] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E1003 18:37:29.281164  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:39:45.417577  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:40:13.122774  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:44:45.417554  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-680560 -n functional-680560
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-10-03 18:47:18.731366283 +0000 UTC m=+1239.996444113
functional_test.go:1460: (dbg) Run:  kubectl --context functional-680560 describe po hello-node-75c85bcc94-p8dhc -n default
functional_test.go:1460: (dbg) kubectl --context functional-680560 describe po hello-node-75c85bcc94-p8dhc -n default:
Name:             hello-node-75c85bcc94-p8dhc
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-680560/192.168.49.2
Start Time:       Fri, 03 Oct 2025 18:37:18 +0000
Labels:           app=hello-node
                  pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
  IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fkjqm (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-fkjqm:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-p8dhc to functional-680560
  Normal   Pulling    7m8s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m8s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     7m8s (x5 over 10m)    kubelet            Error: ErrImagePull
  Normal   BackOff    4m50s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     4m50s (x21 over 10m)  kubelet            Error: ImagePullBackOff
functional_test.go:1460: (dbg) Run:  kubectl --context functional-680560 logs hello-node-75c85bcc94-p8dhc -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-680560 logs hello-node-75c85bcc94-p8dhc -n default: exit status 1 (107.651714ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-p8dhc" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-680560 logs hello-node-75c85bcc94-p8dhc -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.86s)
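This is the same short-name failure as ServiceCmdConnect above, hit at deployment creation time because the test passes the bare name kicbase/echo-server. A hedged reproduction that sidesteps short-name resolution by fully qualifying the reference (the docker.io registry and the :latest tag are assumptions about where the image actually lives):

	# assumption: docker.io/kicbase/echo-server:latest is the intended image
	kubectl --context functional-680560 create deployment hello-node --image=docker.io/kicbase/echo-server:latest
	kubectl --context functional-680560 rollout status deployment/hello-node --timeout=5m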

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-680560 service --namespace=default --https --url hello-node: exit status 115 (507.363837ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:31985
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-680560 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.51s)
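This subtest, and the Format and URL subtests below, are downstream of the DeployApp failure rather than independent service bugs: the NodePort (31985) is allocated, but hello-node has no ready endpoints because its pod never got past ImagePullBackOff. A quick sanity check against a live cluster would look roughly like this (not part of the recorded run):

	kubectl --context functional-680560 get endpoints hello-node
	kubectl --context functional-680560 get pods -l app=hello-node -o wide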

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-680560 service hello-node --url --format={{.IP}}: exit status 115 (533.091352ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-680560 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.53s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-680560 service hello-node --url: exit status 115 (503.484056ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:31985
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-680560 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31985
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.50s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 image load --daemon kicbase/echo-server:functional-680560 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-680560" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.17s)
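ImageReloadDaemon and ImageTagAndLoadDaemon below fail the same assertion: `image load --daemon` exits successfully but the functional-680560 tag never appears in `image ls`. When chasing this by hand it helps to compare minikube's view with what CRI-O actually has on the node; a sketch, assuming the default crictl setup inside the minikube node:

	out/minikube-linux-arm64 -p functional-680560 image ls
	# assumption: crictl on the node talks to the CRI-O socket
	out/minikube-linux-arm64 -p functional-680560 ssh -- sudo crictl images | grep echo-server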

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 image load --daemon kicbase/echo-server:functional-680560 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-680560" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.22s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-680560
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 image load --daemon kicbase/echo-server:functional-680560 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-680560" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.25s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 image save kicbase/echo-server:functional-680560 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1003 18:47:32.778326  314646 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:47:32.778527  314646 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:47:32.778540  314646 out.go:374] Setting ErrFile to fd 2...
	I1003 18:47:32.778545  314646 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:47:32.778830  314646 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-284583/.minikube/bin
	I1003 18:47:32.779480  314646 config.go:182] Loaded profile config "functional-680560": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:47:32.779650  314646 config.go:182] Loaded profile config "functional-680560": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:47:32.780158  314646 cli_runner.go:164] Run: docker container inspect functional-680560 --format={{.State.Status}}
	I1003 18:47:32.798915  314646 ssh_runner.go:195] Run: systemctl --version
	I1003 18:47:32.798981  314646 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-680560
	I1003 18:47:32.816638  314646 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/functional-680560/id_rsa Username:docker}
	I1003 18:47:32.911304  314646 cache_images.go:290] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
	W1003 18:47:32.911384  314646 cache_images.go:254] Failed to load cached images for "functional-680560": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar: no such file or directory
	I1003 18:47:32.911411  314646 cache_images.go:266] failed pushing to: functional-680560

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.19s)
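The stderr makes this a cascading failure: ImageSaveToFile above never wrote echo-server-save.tar, so the load step stops at the stat. Reproducing it outside the harness means running the two steps in order and checking the artifact in between, roughly (the ls check is an added assumption; the paths are copied from the failing run):

	out/minikube-linux-arm64 -p functional-680560 image save kicbase/echo-server:functional-680560 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
	ls -l /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar   # should exist before loading
	out/minikube-linux-arm64 -p functional-680560 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar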

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-680560
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 image save --daemon kicbase/echo-server:functional-680560 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-680560
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-680560: exit status 1 (18.469104ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-680560

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-680560

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.36s)
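Here the assertion runs against the host Docker daemon: `image save --daemon` is expected to land the image there as localhost/kicbase/echo-server:functional-680560, and the inspect shows nothing arrived. A hedged way to see whether any variant of the tag made it to the host (standard docker CLI, not taken from this run):

	docker images | grep echo-server
	docker images localhost/kicbase/echo-server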

                                                
                                    
x
+
TestJSONOutput/pause/Command (2.57s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-679462 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p json-output-679462 --output=json --user=testUser: exit status 80 (2.572399535s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"f9292fbc-eece-4a4d-a54b-7ff640b11bdd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-679462 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"0219d60f-12fc-417d-be87-bc425abef01b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-03T19:01:45Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"d1aed7ab-7e9a-4d98-97cc-1fa483a51c16","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 pause -p json-output-679462 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.57s)
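The GUEST_PAUSE error here, the GUEST_UNPAUSE error in the next subtest, and TestPause/serial/Pause further down all carry the same underlying message: `sudo runc list -f json` exits 1 with "open /run/runc: no such file or directory". minikube's pause path lists containers through runc's default state directory (/run/runc), and on this crio node that directory evidently does not exist. A hedged way to see where container state actually lives (the /run/crio path is an assumption about CRI-O's layout, not something shown in this log):

	# assumption: one of these directories holds the runtime state on a crio node
	out/minikube-linux-arm64 -p json-output-679462 ssh -- ls -d /run/runc /run/crio
	out/minikube-linux-arm64 -p json-output-679462 ssh -- sudo crictl ps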

                                                
                                    
x
+
TestJSONOutput/unpause/Command (1.6s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-679462 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 unpause -p json-output-679462 --output=json --user=testUser: exit status 80 (1.596474728s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"a2e7e893-e1ee-4d01-a6ba-8d169b3269aa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-679462 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"1746c122-ffc2-4234-98ce-e3bba2013aa2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-03T19:01:47Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"89e5c4d1-b781-4c25-85b1-8d00c0252f6f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 unpause -p json-output-679462 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.60s)

                                                
                                    
x
+
TestPause/serial/Pause (6.05s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-844729 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-844729 --alsologtostderr -v=5: exit status 80 (1.585133619s)

                                                
                                                
-- stdout --
	* Pausing node pause-844729 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 19:23:46.785725  446799 out.go:360] Setting OutFile to fd 1 ...
	I1003 19:23:46.787049  446799 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 19:23:46.787088  446799 out.go:374] Setting ErrFile to fd 2...
	I1003 19:23:46.787108  446799 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 19:23:46.787400  446799 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-284583/.minikube/bin
	I1003 19:23:46.787709  446799 out.go:368] Setting JSON to false
	I1003 19:23:46.787758  446799 mustload.go:65] Loading cluster: pause-844729
	I1003 19:23:46.788225  446799 config.go:182] Loaded profile config "pause-844729": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 19:23:46.788720  446799 cli_runner.go:164] Run: docker container inspect pause-844729 --format={{.State.Status}}
	I1003 19:23:46.809585  446799 host.go:66] Checking if "pause-844729" exists ...
	I1003 19:23:46.809892  446799 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 19:23:46.901044  446799 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-03 19:23:46.889456363 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1003 19:23:46.901684  446799 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-844729 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) want
virtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1003 19:23:46.906992  446799 out.go:179] * Pausing node pause-844729 ... 
	I1003 19:23:46.909761  446799 host.go:66] Checking if "pause-844729" exists ...
	I1003 19:23:46.910097  446799 ssh_runner.go:195] Run: systemctl --version
	I1003 19:23:46.910145  446799 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-844729
	I1003 19:23:46.942720  446799 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33393 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/pause-844729/id_rsa Username:docker}
	I1003 19:23:47.051989  446799 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 19:23:47.067296  446799 pause.go:51] kubelet running: true
	I1003 19:23:47.067441  446799 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1003 19:23:47.324110  446799 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1003 19:23:47.324203  446799 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1003 19:23:47.415990  446799 cri.go:89] found id: "b4edb0bc8b2e10ddd91a1f18e41714e9b020effe870b870ad1548c51abdd698a"
	I1003 19:23:47.416012  446799 cri.go:89] found id: "b7cace5722ba0dea6c3f841afbdc009c616b089fc76a656644b067a4f8e082ea"
	I1003 19:23:47.416018  446799 cri.go:89] found id: "da0cffc30d07d485c02b0ec61d8a9b3909ac227213b2060ee5749f2e4c309f14"
	I1003 19:23:47.416021  446799 cri.go:89] found id: "45e08f5b3c8750ac2fd35558a348abcfe4889f155ac6450a819fd64a7c7330b8"
	I1003 19:23:47.416025  446799 cri.go:89] found id: "6168e29def1182e29c0bf294c1c3d7237309f9f85b32e17a56b611beab0de0f3"
	I1003 19:23:47.416029  446799 cri.go:89] found id: "e76d5b298ebfdc13c2635e65d607a1504f98294c7e20d1bb64f2ce5a749224ef"
	I1003 19:23:47.416032  446799 cri.go:89] found id: "5bc9d928c66f715d2cb955773ff9a4ceeac2d33a54d32a1544eac9d3e61700fe"
	I1003 19:23:47.416035  446799 cri.go:89] found id: "84fa045c869f127f450bb8752bea5a8159645bcb9dc95bf2aa9c7f45b5311ca2"
	I1003 19:23:47.416038  446799 cri.go:89] found id: "5d124f9877dc3034ad8f48f78e4d24801d20c0a339bfef51da35d2994dbc8ecd"
	I1003 19:23:47.416049  446799 cri.go:89] found id: "857ea2e27fd5446162221b5717f5c41724882e4d6d67b73122cbadfde6751525"
	I1003 19:23:47.416052  446799 cri.go:89] found id: "0e24f3ce9f6cbd2fee0b930845a84383d871589f9e0d5410c93ebc0a1007c92f"
	I1003 19:23:47.416055  446799 cri.go:89] found id: "fd3fe7965793a71c3c6f9b9521b6b0c283e6b5ed6f1f5aee7fbfb482b5af6f32"
	I1003 19:23:47.416058  446799 cri.go:89] found id: "6f18ec5c83f04389f6cce9ba80e373f135129e84c9590239ca46414eb849a154"
	I1003 19:23:47.416061  446799 cri.go:89] found id: "fe077fc7b7398ab6a71e31a253a8c67d7227163b1d3d6d2ff769425cebd43420"
	I1003 19:23:47.416064  446799 cri.go:89] found id: ""
	I1003 19:23:47.416111  446799 ssh_runner.go:195] Run: sudo runc list -f json
	I1003 19:23:47.427796  446799 retry.go:31] will retry after 135.369632ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-03T19:23:47Z" level=error msg="open /run/runc: no such file or directory"
	I1003 19:23:47.564155  446799 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 19:23:47.577593  446799 pause.go:51] kubelet running: false
	I1003 19:23:47.577659  446799 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1003 19:23:47.722649  446799 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1003 19:23:47.722783  446799 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1003 19:23:47.794303  446799 cri.go:89] found id: "b4edb0bc8b2e10ddd91a1f18e41714e9b020effe870b870ad1548c51abdd698a"
	I1003 19:23:47.794326  446799 cri.go:89] found id: "b7cace5722ba0dea6c3f841afbdc009c616b089fc76a656644b067a4f8e082ea"
	I1003 19:23:47.794332  446799 cri.go:89] found id: "da0cffc30d07d485c02b0ec61d8a9b3909ac227213b2060ee5749f2e4c309f14"
	I1003 19:23:47.794336  446799 cri.go:89] found id: "45e08f5b3c8750ac2fd35558a348abcfe4889f155ac6450a819fd64a7c7330b8"
	I1003 19:23:47.794339  446799 cri.go:89] found id: "6168e29def1182e29c0bf294c1c3d7237309f9f85b32e17a56b611beab0de0f3"
	I1003 19:23:47.794343  446799 cri.go:89] found id: "e76d5b298ebfdc13c2635e65d607a1504f98294c7e20d1bb64f2ce5a749224ef"
	I1003 19:23:47.794363  446799 cri.go:89] found id: "5bc9d928c66f715d2cb955773ff9a4ceeac2d33a54d32a1544eac9d3e61700fe"
	I1003 19:23:47.794371  446799 cri.go:89] found id: "84fa045c869f127f450bb8752bea5a8159645bcb9dc95bf2aa9c7f45b5311ca2"
	I1003 19:23:47.794376  446799 cri.go:89] found id: "5d124f9877dc3034ad8f48f78e4d24801d20c0a339bfef51da35d2994dbc8ecd"
	I1003 19:23:47.794382  446799 cri.go:89] found id: "857ea2e27fd5446162221b5717f5c41724882e4d6d67b73122cbadfde6751525"
	I1003 19:23:47.794385  446799 cri.go:89] found id: "0e24f3ce9f6cbd2fee0b930845a84383d871589f9e0d5410c93ebc0a1007c92f"
	I1003 19:23:47.794388  446799 cri.go:89] found id: "fd3fe7965793a71c3c6f9b9521b6b0c283e6b5ed6f1f5aee7fbfb482b5af6f32"
	I1003 19:23:47.794391  446799 cri.go:89] found id: "6f18ec5c83f04389f6cce9ba80e373f135129e84c9590239ca46414eb849a154"
	I1003 19:23:47.794394  446799 cri.go:89] found id: "fe077fc7b7398ab6a71e31a253a8c67d7227163b1d3d6d2ff769425cebd43420"
	I1003 19:23:47.794398  446799 cri.go:89] found id: ""
	I1003 19:23:47.794457  446799 ssh_runner.go:195] Run: sudo runc list -f json
	I1003 19:23:47.805544  446799 retry.go:31] will retry after 250.260581ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-03T19:23:47Z" level=error msg="open /run/runc: no such file or directory"
	I1003 19:23:48.056020  446799 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 19:23:48.069712  446799 pause.go:51] kubelet running: false
	I1003 19:23:48.069804  446799 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1003 19:23:48.216039  446799 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1003 19:23:48.216195  446799 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1003 19:23:48.280669  446799 cri.go:89] found id: "b4edb0bc8b2e10ddd91a1f18e41714e9b020effe870b870ad1548c51abdd698a"
	I1003 19:23:48.280708  446799 cri.go:89] found id: "b7cace5722ba0dea6c3f841afbdc009c616b089fc76a656644b067a4f8e082ea"
	I1003 19:23:48.280714  446799 cri.go:89] found id: "da0cffc30d07d485c02b0ec61d8a9b3909ac227213b2060ee5749f2e4c309f14"
	I1003 19:23:48.280717  446799 cri.go:89] found id: "45e08f5b3c8750ac2fd35558a348abcfe4889f155ac6450a819fd64a7c7330b8"
	I1003 19:23:48.280749  446799 cri.go:89] found id: "6168e29def1182e29c0bf294c1c3d7237309f9f85b32e17a56b611beab0de0f3"
	I1003 19:23:48.280754  446799 cri.go:89] found id: "e76d5b298ebfdc13c2635e65d607a1504f98294c7e20d1bb64f2ce5a749224ef"
	I1003 19:23:48.280757  446799 cri.go:89] found id: "5bc9d928c66f715d2cb955773ff9a4ceeac2d33a54d32a1544eac9d3e61700fe"
	I1003 19:23:48.280760  446799 cri.go:89] found id: "84fa045c869f127f450bb8752bea5a8159645bcb9dc95bf2aa9c7f45b5311ca2"
	I1003 19:23:48.280763  446799 cri.go:89] found id: "5d124f9877dc3034ad8f48f78e4d24801d20c0a339bfef51da35d2994dbc8ecd"
	I1003 19:23:48.280769  446799 cri.go:89] found id: "857ea2e27fd5446162221b5717f5c41724882e4d6d67b73122cbadfde6751525"
	I1003 19:23:48.280776  446799 cri.go:89] found id: "0e24f3ce9f6cbd2fee0b930845a84383d871589f9e0d5410c93ebc0a1007c92f"
	I1003 19:23:48.280779  446799 cri.go:89] found id: "fd3fe7965793a71c3c6f9b9521b6b0c283e6b5ed6f1f5aee7fbfb482b5af6f32"
	I1003 19:23:48.280782  446799 cri.go:89] found id: "6f18ec5c83f04389f6cce9ba80e373f135129e84c9590239ca46414eb849a154"
	I1003 19:23:48.280788  446799 cri.go:89] found id: "fe077fc7b7398ab6a71e31a253a8c67d7227163b1d3d6d2ff769425cebd43420"
	I1003 19:23:48.280794  446799 cri.go:89] found id: ""
	I1003 19:23:48.280854  446799 ssh_runner.go:195] Run: sudo runc list -f json
	I1003 19:23:48.295334  446799 out.go:203] 
	W1003 19:23:48.298453  446799 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-03T19:23:48Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-03T19:23:48Z" level=error msg="open /run/runc: no such file or directory"
	
	W1003 19:23:48.298477  446799 out.go:285] * 
	* 
	W1003 19:23:48.305595  446799 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 19:23:48.308434  446799 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-844729 --alsologtostderr -v=5" : exit status 80
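The "will retry after ..." entries above come from a retry-with-backoff loop around `sudo runc list -f json`, which keeps failing because `/run/runc` does not exist on the node; once the retries are exhausted, the pause command exits with GUEST_PAUSE (exit status 80). The following is a minimal, standalone sketch of that retry pattern, not minikube's actual retry.go: the runcList helper, the backoff schedule, and the 2-second budget are assumptions for illustration only.

	// backoff sketch: re-run a command until it succeeds or the time budget runs out.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// runcList mirrors the failing call from the log: `sudo runc list -f json`.
	func runcList() error {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			return fmt.Errorf("runc list -f json: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		deadline := time.Now().Add(2 * time.Second) // assumed budget
		wait := 100 * time.Millisecond              // assumed initial backoff
		for {
			err := runcList()
			if err == nil {
				fmt.Println("runc list succeeded")
				return
			}
			if time.Now().After(deadline) {
				// the test above gives up here and surfaces GUEST_PAUSE
				fmt.Printf("giving up: %v\n", err)
				return
			}
			fmt.Printf("will retry after %v: %v\n", wait, err)
			time.Sleep(wait)
			wait *= 2 // simple exponential backoff between attempts
		}
	}
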
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-844729
helpers_test.go:243: (dbg) docker inspect pause-844729:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "cf7ab3517f2ae7a4937862ee8f7ee047bfc4b9bfc4b810b5ba6c94cbfa68c39b",
	        "Created": "2025-10-03T19:22:03.195479704Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 440675,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-03T19:22:03.257980913Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/cf7ab3517f2ae7a4937862ee8f7ee047bfc4b9bfc4b810b5ba6c94cbfa68c39b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cf7ab3517f2ae7a4937862ee8f7ee047bfc4b9bfc4b810b5ba6c94cbfa68c39b/hostname",
	        "HostsPath": "/var/lib/docker/containers/cf7ab3517f2ae7a4937862ee8f7ee047bfc4b9bfc4b810b5ba6c94cbfa68c39b/hosts",
	        "LogPath": "/var/lib/docker/containers/cf7ab3517f2ae7a4937862ee8f7ee047bfc4b9bfc4b810b5ba6c94cbfa68c39b/cf7ab3517f2ae7a4937862ee8f7ee047bfc4b9bfc4b810b5ba6c94cbfa68c39b-json.log",
	        "Name": "/pause-844729",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-844729:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-844729",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "cf7ab3517f2ae7a4937862ee8f7ee047bfc4b9bfc4b810b5ba6c94cbfa68c39b",
	                "LowerDir": "/var/lib/docker/overlay2/871bedc0b2467036df02d8ce1022320cdf3e756fab32cce3ba1f1d98f9e27236-init/diff:/var/lib/docker/overlay2/87b205803817b0b71a214d995ab7e10a92033bbf72d76d6e052f1d21ccecb313/diff",
	                "MergedDir": "/var/lib/docker/overlay2/871bedc0b2467036df02d8ce1022320cdf3e756fab32cce3ba1f1d98f9e27236/merged",
	                "UpperDir": "/var/lib/docker/overlay2/871bedc0b2467036df02d8ce1022320cdf3e756fab32cce3ba1f1d98f9e27236/diff",
	                "WorkDir": "/var/lib/docker/overlay2/871bedc0b2467036df02d8ce1022320cdf3e756fab32cce3ba1f1d98f9e27236/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-844729",
	                "Source": "/var/lib/docker/volumes/pause-844729/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-844729",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-844729",
	                "name.minikube.sigs.k8s.io": "pause-844729",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b44380d16e7c66063187e333169732332b7b40d6df7765ffcbe77905fb69a74e",
	            "SandboxKey": "/var/run/docker/netns/b44380d16e7c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33393"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33394"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33397"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33395"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33396"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-844729": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9e:d2:ab:af:fa:f4",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "153e463ae4daea096ce512cd0e3e6b4feb726d8b0603650996676d765451008a",
	                    "EndpointID": "b38d57ee71c71fb502cdc51842b3532b71f3ecea7ac38ef020b07be637cff560",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-844729",
	                        "cf7ab3517f2a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-844729 -n pause-844729
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-844729 -n pause-844729: exit status 2 (337.012689ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-844729 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-844729 logs -n 25: (1.396711822s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-929800 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                    │ NoKubernetes-929800       │ jenkins │ v1.37.0 │ 03 Oct 25 19:17 UTC │ 03 Oct 25 19:18 UTC │
	│ start   │ -p missing-upgrade-546147 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-546147    │ jenkins │ v1.32.0 │ 03 Oct 25 19:17 UTC │ 03 Oct 25 19:18 UTC │
	│ start   │ -p NoKubernetes-929800 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-929800       │ jenkins │ v1.37.0 │ 03 Oct 25 19:18 UTC │ 03 Oct 25 19:19 UTC │
	│ start   │ -p missing-upgrade-546147 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-546147    │ jenkins │ v1.37.0 │ 03 Oct 25 19:19 UTC │ 03 Oct 25 19:19 UTC │
	│ delete  │ -p NoKubernetes-929800                                                                                                                   │ NoKubernetes-929800       │ jenkins │ v1.37.0 │ 03 Oct 25 19:19 UTC │ 03 Oct 25 19:19 UTC │
	│ start   │ -p NoKubernetes-929800 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-929800       │ jenkins │ v1.37.0 │ 03 Oct 25 19:19 UTC │ 03 Oct 25 19:19 UTC │
	│ ssh     │ -p NoKubernetes-929800 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-929800       │ jenkins │ v1.37.0 │ 03 Oct 25 19:19 UTC │                     │
	│ stop    │ -p NoKubernetes-929800                                                                                                                   │ NoKubernetes-929800       │ jenkins │ v1.37.0 │ 03 Oct 25 19:19 UTC │ 03 Oct 25 19:19 UTC │
	│ start   │ -p NoKubernetes-929800 --driver=docker  --container-runtime=crio                                                                         │ NoKubernetes-929800       │ jenkins │ v1.37.0 │ 03 Oct 25 19:19 UTC │ 03 Oct 25 19:19 UTC │
	│ ssh     │ -p NoKubernetes-929800 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-929800       │ jenkins │ v1.37.0 │ 03 Oct 25 19:19 UTC │                     │
	│ delete  │ -p NoKubernetes-929800                                                                                                                   │ NoKubernetes-929800       │ jenkins │ v1.37.0 │ 03 Oct 25 19:19 UTC │ 03 Oct 25 19:19 UTC │
	│ start   │ -p kubernetes-upgrade-629875 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-629875 │ jenkins │ v1.37.0 │ 03 Oct 25 19:19 UTC │ 03 Oct 25 19:20 UTC │
	│ delete  │ -p missing-upgrade-546147                                                                                                                │ missing-upgrade-546147    │ jenkins │ v1.37.0 │ 03 Oct 25 19:19 UTC │ 03 Oct 25 19:19 UTC │
	│ start   │ -p stopped-upgrade-414530 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-414530    │ jenkins │ v1.32.0 │ 03 Oct 25 19:19 UTC │ 03 Oct 25 19:20 UTC │
	│ stop    │ -p kubernetes-upgrade-629875                                                                                                             │ kubernetes-upgrade-629875 │ jenkins │ v1.37.0 │ 03 Oct 25 19:20 UTC │ 03 Oct 25 19:20 UTC │
	│ start   │ -p kubernetes-upgrade-629875 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-629875 │ jenkins │ v1.37.0 │ 03 Oct 25 19:20 UTC │                     │
	│ stop    │ stopped-upgrade-414530 stop                                                                                                              │ stopped-upgrade-414530    │ jenkins │ v1.32.0 │ 03 Oct 25 19:20 UTC │ 03 Oct 25 19:20 UTC │
	│ start   │ -p stopped-upgrade-414530 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-414530    │ jenkins │ v1.37.0 │ 03 Oct 25 19:20 UTC │ 03 Oct 25 19:20 UTC │
	│ delete  │ -p stopped-upgrade-414530                                                                                                                │ stopped-upgrade-414530    │ jenkins │ v1.37.0 │ 03 Oct 25 19:20 UTC │ 03 Oct 25 19:21 UTC │
	│ start   │ -p running-upgrade-024862 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-024862    │ jenkins │ v1.32.0 │ 03 Oct 25 19:21 UTC │ 03 Oct 25 19:21 UTC │
	│ start   │ -p running-upgrade-024862 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-024862    │ jenkins │ v1.37.0 │ 03 Oct 25 19:21 UTC │ 03 Oct 25 19:21 UTC │
	│ delete  │ -p running-upgrade-024862                                                                                                                │ running-upgrade-024862    │ jenkins │ v1.37.0 │ 03 Oct 25 19:21 UTC │ 03 Oct 25 19:21 UTC │
	│ start   │ -p pause-844729 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-844729              │ jenkins │ v1.37.0 │ 03 Oct 25 19:21 UTC │ 03 Oct 25 19:23 UTC │
	│ start   │ -p pause-844729 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-844729              │ jenkins │ v1.37.0 │ 03 Oct 25 19:23 UTC │ 03 Oct 25 19:23 UTC │
	│ pause   │ -p pause-844729 --alsologtostderr -v=5                                                                                                   │ pause-844729              │ jenkins │ v1.37.0 │ 03 Oct 25 19:23 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/03 19:23:22
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 19:23:22.033066  444809 out.go:360] Setting OutFile to fd 1 ...
	I1003 19:23:22.033250  444809 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 19:23:22.033261  444809 out.go:374] Setting ErrFile to fd 2...
	I1003 19:23:22.033267  444809 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 19:23:22.033536  444809 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-284583/.minikube/bin
	I1003 19:23:22.033936  444809 out.go:368] Setting JSON to false
	I1003 19:23:22.034984  444809 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7553,"bootTime":1759511849,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1003 19:23:22.035065  444809 start.go:140] virtualization:  
	I1003 19:23:22.040219  444809 out.go:179] * [pause-844729] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1003 19:23:22.043386  444809 out.go:179]   - MINIKUBE_LOCATION=21625
	I1003 19:23:22.043432  444809 notify.go:220] Checking for updates...
	I1003 19:23:22.046466  444809 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 19:23:22.049438  444809 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21625-284583/kubeconfig
	I1003 19:23:22.052277  444809 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-284583/.minikube
	I1003 19:23:22.055714  444809 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1003 19:23:22.058728  444809 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 19:23:22.062457  444809 config.go:182] Loaded profile config "pause-844729": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 19:23:22.063052  444809 driver.go:421] Setting default libvirt URI to qemu:///system
	I1003 19:23:22.088856  444809 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1003 19:23:22.088973  444809 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 19:23:22.161081  444809 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-03 19:23:22.150870956 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1003 19:23:22.161194  444809 docker.go:318] overlay module found
	I1003 19:23:22.164385  444809 out.go:179] * Using the docker driver based on existing profile
	I1003 19:23:22.167204  444809 start.go:304] selected driver: docker
	I1003 19:23:22.167226  444809 start.go:924] validating driver "docker" against &{Name:pause-844729 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-844729 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regi
stry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 19:23:22.167368  444809 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 19:23:22.167488  444809 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 19:23:22.225202  444809 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-03 19:23:22.216482512 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1003 19:23:22.225617  444809 cni.go:84] Creating CNI manager for ""
	I1003 19:23:22.225682  444809 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 19:23:22.225733  444809 start.go:348] cluster config:
	{Name:pause-844729 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-844729 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false
storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 19:23:22.228804  444809 out.go:179] * Starting "pause-844729" primary control-plane node in "pause-844729" cluster
	I1003 19:23:22.231563  444809 cache.go:123] Beginning downloading kic base image for docker with crio
	I1003 19:23:22.234455  444809 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1003 19:23:22.237418  444809 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 19:23:22.237500  444809 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1003 19:23:22.237511  444809 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21625-284583/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1003 19:23:22.237673  444809 cache.go:58] Caching tarball of preloaded images
	I1003 19:23:22.237776  444809 preload.go:233] Found /home/jenkins/minikube-integration/21625-284583/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1003 19:23:22.237786  444809 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1003 19:23:22.237940  444809 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/pause-844729/config.json ...
	I1003 19:23:22.258016  444809 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1003 19:23:22.258039  444809 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1003 19:23:22.258052  444809 cache.go:232] Successfully downloaded all kic artifacts
	I1003 19:23:22.258077  444809 start.go:360] acquireMachinesLock for pause-844729: {Name:mk018320e2700ef01919004e8c23ac2ff4cc641e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 19:23:22.258140  444809 start.go:364] duration metric: took 37.367µs to acquireMachinesLock for "pause-844729"
	I1003 19:23:22.258163  444809 start.go:96] Skipping create...Using existing machine configuration
	I1003 19:23:22.258173  444809 fix.go:54] fixHost starting: 
	I1003 19:23:22.258441  444809 cli_runner.go:164] Run: docker container inspect pause-844729 --format={{.State.Status}}
	I1003 19:23:22.281538  444809 fix.go:112] recreateIfNeeded on pause-844729: state=Running err=<nil>
	W1003 19:23:22.281572  444809 fix.go:138] unexpected machine state, will restart: <nil>
	I1003 19:23:22.713958  432533 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1003 19:23:22.714335  432533 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1003 19:23:22.714375  432533 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 19:23:22.714435  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 19:23:22.740718  432533 cri.go:89] found id: "04bedbd6c6d1a6948852e8a02d927b6d181e1cdd8b926cadb512e6c5a9e2bc18"
	I1003 19:23:22.740764  432533 cri.go:89] found id: ""
	I1003 19:23:22.740773  432533 logs.go:282] 1 containers: [04bedbd6c6d1a6948852e8a02d927b6d181e1cdd8b926cadb512e6c5a9e2bc18]
	I1003 19:23:22.740830  432533 ssh_runner.go:195] Run: which crictl
	I1003 19:23:22.744518  432533 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 19:23:22.744590  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 19:23:22.788943  432533 cri.go:89] found id: ""
	I1003 19:23:22.788967  432533 logs.go:282] 0 containers: []
	W1003 19:23:22.788975  432533 logs.go:284] No container was found matching "etcd"
	I1003 19:23:22.788982  432533 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 19:23:22.789041  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 19:23:22.828675  432533 cri.go:89] found id: ""
	I1003 19:23:22.828704  432533 logs.go:282] 0 containers: []
	W1003 19:23:22.828713  432533 logs.go:284] No container was found matching "coredns"
	I1003 19:23:22.828719  432533 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 19:23:22.828798  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 19:23:22.860516  432533 cri.go:89] found id: "dc5ccb1606b8053522f591bfece54cc0b244422b0fc5af82da81d2215cabb3a1"
	I1003 19:23:22.860542  432533 cri.go:89] found id: ""
	I1003 19:23:22.860550  432533 logs.go:282] 1 containers: [dc5ccb1606b8053522f591bfece54cc0b244422b0fc5af82da81d2215cabb3a1]
	I1003 19:23:22.860603  432533 ssh_runner.go:195] Run: which crictl
	I1003 19:23:22.865945  432533 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 19:23:22.866012  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 19:23:22.903975  432533 cri.go:89] found id: ""
	I1003 19:23:22.903996  432533 logs.go:282] 0 containers: []
	W1003 19:23:22.904004  432533 logs.go:284] No container was found matching "kube-proxy"
	I1003 19:23:22.904011  432533 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 19:23:22.904067  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 19:23:22.936648  432533 cri.go:89] found id: "c62c95f3827d9847bcb196dcef4991555aba77e95ba4e6a5900a14faf7679b30"
	I1003 19:23:22.936668  432533 cri.go:89] found id: ""
	I1003 19:23:22.936676  432533 logs.go:282] 1 containers: [c62c95f3827d9847bcb196dcef4991555aba77e95ba4e6a5900a14faf7679b30]
	I1003 19:23:22.936747  432533 ssh_runner.go:195] Run: which crictl
	I1003 19:23:22.941753  432533 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 19:23:22.941822  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 19:23:22.973122  432533 cri.go:89] found id: ""
	I1003 19:23:22.973145  432533 logs.go:282] 0 containers: []
	W1003 19:23:22.973154  432533 logs.go:284] No container was found matching "kindnet"
	I1003 19:23:22.973161  432533 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1003 19:23:22.973216  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1003 19:23:23.004326  432533 cri.go:89] found id: ""
	I1003 19:23:23.004357  432533 logs.go:282] 0 containers: []
	W1003 19:23:23.004366  432533 logs.go:284] No container was found matching "storage-provisioner"
	I1003 19:23:23.004381  432533 logs.go:123] Gathering logs for CRI-O ...
	I1003 19:23:23.004392  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 19:23:23.098905  432533 logs.go:123] Gathering logs for container status ...
	I1003 19:23:23.098971  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 19:23:23.159167  432533 logs.go:123] Gathering logs for kubelet ...
	I1003 19:23:23.159194  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 19:23:23.309441  432533 logs.go:123] Gathering logs for dmesg ...
	I1003 19:23:23.309477  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 19:23:23.333940  432533 logs.go:123] Gathering logs for describe nodes ...
	I1003 19:23:23.334215  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 19:23:23.409300  432533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 19:23:23.409325  432533 logs.go:123] Gathering logs for kube-apiserver [04bedbd6c6d1a6948852e8a02d927b6d181e1cdd8b926cadb512e6c5a9e2bc18] ...
	I1003 19:23:23.409338  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 04bedbd6c6d1a6948852e8a02d927b6d181e1cdd8b926cadb512e6c5a9e2bc18"
	I1003 19:23:23.446864  432533 logs.go:123] Gathering logs for kube-scheduler [dc5ccb1606b8053522f591bfece54cc0b244422b0fc5af82da81d2215cabb3a1] ...
	I1003 19:23:23.446935  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dc5ccb1606b8053522f591bfece54cc0b244422b0fc5af82da81d2215cabb3a1"
	I1003 19:23:23.512406  432533 logs.go:123] Gathering logs for kube-controller-manager [c62c95f3827d9847bcb196dcef4991555aba77e95ba4e6a5900a14faf7679b30] ...
	I1003 19:23:23.512443  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c62c95f3827d9847bcb196dcef4991555aba77e95ba4e6a5900a14faf7679b30"
	I1003 19:23:26.041309  432533 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1003 19:23:26.041808  432533 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1003 19:23:26.041868  432533 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 19:23:26.041933  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 19:23:26.074274  432533 cri.go:89] found id: "04bedbd6c6d1a6948852e8a02d927b6d181e1cdd8b926cadb512e6c5a9e2bc18"
	I1003 19:23:26.074298  432533 cri.go:89] found id: ""
	I1003 19:23:26.074308  432533 logs.go:282] 1 containers: [04bedbd6c6d1a6948852e8a02d927b6d181e1cdd8b926cadb512e6c5a9e2bc18]
	I1003 19:23:26.074378  432533 ssh_runner.go:195] Run: which crictl
	I1003 19:23:26.078232  432533 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 19:23:26.078305  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 19:23:26.112453  432533 cri.go:89] found id: ""
	I1003 19:23:26.112518  432533 logs.go:282] 0 containers: []
	W1003 19:23:26.112538  432533 logs.go:284] No container was found matching "etcd"
	I1003 19:23:26.112560  432533 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 19:23:26.112647  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 19:23:26.139341  432533 cri.go:89] found id: ""
	I1003 19:23:26.139363  432533 logs.go:282] 0 containers: []
	W1003 19:23:26.139371  432533 logs.go:284] No container was found matching "coredns"
	I1003 19:23:26.139378  432533 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 19:23:26.139439  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 19:23:26.166984  432533 cri.go:89] found id: "dc5ccb1606b8053522f591bfece54cc0b244422b0fc5af82da81d2215cabb3a1"
	I1003 19:23:26.167007  432533 cri.go:89] found id: ""
	I1003 19:23:26.167016  432533 logs.go:282] 1 containers: [dc5ccb1606b8053522f591bfece54cc0b244422b0fc5af82da81d2215cabb3a1]
	I1003 19:23:26.167102  432533 ssh_runner.go:195] Run: which crictl
	I1003 19:23:26.171212  432533 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 19:23:26.171309  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 19:23:26.197654  432533 cri.go:89] found id: ""
	I1003 19:23:26.197679  432533 logs.go:282] 0 containers: []
	W1003 19:23:26.197688  432533 logs.go:284] No container was found matching "kube-proxy"
	I1003 19:23:26.197695  432533 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 19:23:26.197751  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 19:23:26.223438  432533 cri.go:89] found id: "c62c95f3827d9847bcb196dcef4991555aba77e95ba4e6a5900a14faf7679b30"
	I1003 19:23:26.223461  432533 cri.go:89] found id: ""
	I1003 19:23:26.223470  432533 logs.go:282] 1 containers: [c62c95f3827d9847bcb196dcef4991555aba77e95ba4e6a5900a14faf7679b30]
	I1003 19:23:26.223526  432533 ssh_runner.go:195] Run: which crictl
	I1003 19:23:26.227564  432533 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 19:23:26.227633  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 19:23:26.253982  432533 cri.go:89] found id: ""
	I1003 19:23:26.254061  432533 logs.go:282] 0 containers: []
	W1003 19:23:26.254076  432533 logs.go:284] No container was found matching "kindnet"
	I1003 19:23:26.254084  432533 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1003 19:23:26.254148  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1003 19:23:26.279329  432533 cri.go:89] found id: ""
	I1003 19:23:26.279352  432533 logs.go:282] 0 containers: []
	W1003 19:23:26.279361  432533 logs.go:284] No container was found matching "storage-provisioner"
	I1003 19:23:26.279372  432533 logs.go:123] Gathering logs for describe nodes ...
	I1003 19:23:26.279383  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 19:23:22.284802  444809 out.go:252] * Updating the running docker "pause-844729" container ...
	I1003 19:23:22.284835  444809 machine.go:93] provisionDockerMachine start ...
	I1003 19:23:22.284913  444809 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-844729
	I1003 19:23:22.303011  444809 main.go:141] libmachine: Using SSH client type: native
	I1003 19:23:22.303336  444809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33393 <nil> <nil>}
	I1003 19:23:22.303351  444809 main.go:141] libmachine: About to run SSH command:
	hostname
	I1003 19:23:22.436262  444809 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-844729
	
	I1003 19:23:22.436294  444809 ubuntu.go:182] provisioning hostname "pause-844729"
	I1003 19:23:22.436355  444809 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-844729
	I1003 19:23:22.454966  444809 main.go:141] libmachine: Using SSH client type: native
	I1003 19:23:22.455280  444809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33393 <nil> <nil>}
	I1003 19:23:22.455294  444809 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-844729 && echo "pause-844729" | sudo tee /etc/hostname
	I1003 19:23:22.597723  444809 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-844729
	
	I1003 19:23:22.597874  444809 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-844729
	I1003 19:23:22.616212  444809 main.go:141] libmachine: Using SSH client type: native
	I1003 19:23:22.616547  444809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33393 <nil> <nil>}
	I1003 19:23:22.616563  444809 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-844729' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-844729/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-844729' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 19:23:22.757706  444809 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 19:23:22.757789  444809 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21625-284583/.minikube CaCertPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21625-284583/.minikube}
	I1003 19:23:22.757848  444809 ubuntu.go:190] setting up certificates
	I1003 19:23:22.757877  444809 provision.go:84] configureAuth start
	I1003 19:23:22.757966  444809 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-844729
	I1003 19:23:22.779781  444809 provision.go:143] copyHostCerts
	I1003 19:23:22.779847  444809 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-284583/.minikube/ca.pem, removing ...
	I1003 19:23:22.779864  444809 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-284583/.minikube/ca.pem
	I1003 19:23:22.779959  444809 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21625-284583/.minikube/ca.pem (1082 bytes)
	I1003 19:23:22.780067  444809 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-284583/.minikube/cert.pem, removing ...
	I1003 19:23:22.780074  444809 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-284583/.minikube/cert.pem
	I1003 19:23:22.780113  444809 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21625-284583/.minikube/cert.pem (1123 bytes)
	I1003 19:23:22.780175  444809 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-284583/.minikube/key.pem, removing ...
	I1003 19:23:22.780180  444809 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-284583/.minikube/key.pem
	I1003 19:23:22.780202  444809 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21625-284583/.minikube/key.pem (1675 bytes)
	I1003 19:23:22.780246  444809 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca-key.pem org=jenkins.pause-844729 san=[127.0.0.1 192.168.76.2 localhost minikube pause-844729]
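The server certificate generated here carries the SAN list printed above (127.0.0.1, the container IP 192.168.76.2, localhost, minikube, pause-844729). If a later TLS failure looks like a hostname mismatch, the SANs actually baked into server.pem can be inspected directly; a sketch, assuming openssl is available on the host:

    # Print the Subject Alternative Names embedded in the generated server certificate.
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server.pem \
      | grep -A1 "Subject Alternative Name"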
	I1003 19:23:22.856374  444809 provision.go:177] copyRemoteCerts
	I1003 19:23:22.856447  444809 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 19:23:22.856492  444809 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-844729
	I1003 19:23:22.882359  444809 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33393 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/pause-844729/id_rsa Username:docker}
	I1003 19:23:22.988979  444809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 19:23:23.012714  444809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1003 19:23:23.038510  444809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1003 19:23:23.062449  444809 provision.go:87] duration metric: took 304.536804ms to configureAuth
	I1003 19:23:23.062516  444809 ubuntu.go:206] setting minikube options for container-runtime
	I1003 19:23:23.062752  444809 config.go:182] Loaded profile config "pause-844729": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 19:23:23.062884  444809 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-844729
	I1003 19:23:23.094442  444809 main.go:141] libmachine: Using SSH client type: native
	I1003 19:23:23.094740  444809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33393 <nil> <nil>}
	I1003 19:23:23.094755  444809 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1003 19:23:28.448152  444809 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1003 19:23:28.448174  444809 machine.go:96] duration metric: took 6.163330744s to provisionDockerMachine
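Most of the 6.16s reported here is the `systemctl restart crio` at the end of the command above (issued at 19:23:23.09, returning at 19:23:28.44). Whether the --insecure-registry option actually reached the daemon can be checked from inside the node; a sketch, assuming the crio unit sources /etc/sysconfig/crio.minikube as an environment file:

    # Confirm the drop-in exists and that the running crio process picked up the flag.
    cat /etc/sysconfig/crio.minikube
    systemctl cat crio | grep -A2 -i EnvironmentFile   # assumption: the unit references the sysconfig file
    ps -o args= -C crio | tr ' ' '\n' | grep insecure-registry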
	I1003 19:23:28.448184  444809 start.go:293] postStartSetup for "pause-844729" (driver="docker")
	I1003 19:23:28.448195  444809 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 19:23:28.448254  444809 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 19:23:28.448296  444809 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-844729
	I1003 19:23:28.466757  444809 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33393 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/pause-844729/id_rsa Username:docker}
	I1003 19:23:28.564834  444809 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 19:23:28.568490  444809 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1003 19:23:28.568517  444809 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1003 19:23:28.568528  444809 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-284583/.minikube/addons for local assets ...
	I1003 19:23:28.568606  444809 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-284583/.minikube/files for local assets ...
	I1003 19:23:28.568764  444809 filesync.go:149] local asset: /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem -> 2864342.pem in /etc/ssl/certs
	I1003 19:23:28.568885  444809 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 19:23:28.576606  444809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem --> /etc/ssl/certs/2864342.pem (1708 bytes)
	I1003 19:23:28.595345  444809 start.go:296] duration metric: took 147.14596ms for postStartSetup
	I1003 19:23:28.595449  444809 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 19:23:28.595496  444809 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-844729
	I1003 19:23:28.612849  444809 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33393 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/pause-844729/id_rsa Username:docker}
	I1003 19:23:28.706217  444809 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1003 19:23:28.711532  444809 fix.go:56] duration metric: took 6.453351887s for fixHost
	I1003 19:23:28.711557  444809 start.go:83] releasing machines lock for "pause-844729", held for 6.453404401s
	I1003 19:23:28.711627  444809 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-844729
	I1003 19:23:28.728457  444809 ssh_runner.go:195] Run: cat /version.json
	I1003 19:23:28.728516  444809 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-844729
	I1003 19:23:28.728568  444809 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 19:23:28.728618  444809 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-844729
	I1003 19:23:28.750271  444809 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33393 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/pause-844729/id_rsa Username:docker}
	I1003 19:23:28.752710  444809 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33393 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/pause-844729/id_rsa Username:docker}
	I1003 19:23:28.931589  444809 ssh_runner.go:195] Run: systemctl --version
	I1003 19:23:28.938369  444809 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1003 19:23:28.980055  444809 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1003 19:23:28.984623  444809 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 19:23:28.984776  444809 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 19:23:28.993632  444809 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1003 19:23:28.993660  444809 start.go:495] detecting cgroup driver to use...
	I1003 19:23:28.993706  444809 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1003 19:23:28.993757  444809 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 19:23:29.009689  444809 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 19:23:29.023549  444809 docker.go:218] disabling cri-docker service (if available) ...
	I1003 19:23:29.023658  444809 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1003 19:23:29.040348  444809 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1003 19:23:29.055435  444809 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1003 19:23:29.207437  444809 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1003 19:23:29.389654  444809 docker.go:234] disabling docker service ...
	I1003 19:23:29.389757  444809 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1003 19:23:29.407786  444809 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1003 19:23:29.422740  444809 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1003 19:23:29.593207  444809 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1003 19:23:29.780047  444809 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1003 19:23:29.796287  444809 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 19:23:29.818369  444809 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1003 19:23:29.818466  444809 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:23:29.827915  444809 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1003 19:23:29.828026  444809 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:23:29.838355  444809 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:23:29.847682  444809 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:23:29.857892  444809 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 19:23:29.866906  444809 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:23:29.876275  444809 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:23:29.885681  444809 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:23:29.895326  444809 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 19:23:29.903702  444809 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 19:23:29.912679  444809 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 19:23:30.117055  444809 ssh_runner.go:195] Run: sudo systemctl restart crio
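The sed calls above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pause_image is pinned to registry.k8s.io/pause:3.10.1, cgroup_manager is set to cgroupfs with conmon_cgroup = "pod", and net.ipv4.ip_unprivileged_port_start=0 is appended to default_sysctls before CRI-O is restarted. A quick way to confirm the drop-in ended up with the expected values (a sketch; the keys simply mirror the sed expressions above):

    # Show the settings the sed commands above are expected to have left behind.
    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf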
	I1003 19:23:30.313038  444809 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1003 19:23:30.313146  444809 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1003 19:23:30.318062  444809 start.go:563] Will wait 60s for crictl version
	I1003 19:23:30.318184  444809 ssh_runner.go:195] Run: which crictl
	I1003 19:23:30.322157  444809 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1003 19:23:30.347079  444809 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1003 19:23:30.347186  444809 ssh_runner.go:195] Run: crio --version
	I1003 19:23:30.380606  444809 ssh_runner.go:195] Run: crio --version
	I1003 19:23:30.416056  444809 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1003 19:23:26.344626  432533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 19:23:26.344646  432533 logs.go:123] Gathering logs for kube-apiserver [04bedbd6c6d1a6948852e8a02d927b6d181e1cdd8b926cadb512e6c5a9e2bc18] ...
	I1003 19:23:26.344659  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 04bedbd6c6d1a6948852e8a02d927b6d181e1cdd8b926cadb512e6c5a9e2bc18"
	I1003 19:23:26.378186  432533 logs.go:123] Gathering logs for kube-scheduler [dc5ccb1606b8053522f591bfece54cc0b244422b0fc5af82da81d2215cabb3a1] ...
	I1003 19:23:26.378265  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dc5ccb1606b8053522f591bfece54cc0b244422b0fc5af82da81d2215cabb3a1"
	I1003 19:23:26.433575  432533 logs.go:123] Gathering logs for kube-controller-manager [c62c95f3827d9847bcb196dcef4991555aba77e95ba4e6a5900a14faf7679b30] ...
	I1003 19:23:26.433620  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c62c95f3827d9847bcb196dcef4991555aba77e95ba4e6a5900a14faf7679b30"
	I1003 19:23:26.459572  432533 logs.go:123] Gathering logs for CRI-O ...
	I1003 19:23:26.459600  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 19:23:26.519115  432533 logs.go:123] Gathering logs for container status ...
	I1003 19:23:26.519150  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 19:23:26.551327  432533 logs.go:123] Gathering logs for kubelet ...
	I1003 19:23:26.551356  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 19:23:26.665675  432533 logs.go:123] Gathering logs for dmesg ...
	I1003 19:23:26.665718  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 19:23:29.184584  432533 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1003 19:23:29.185015  432533 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1003 19:23:29.185068  432533 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 19:23:29.185121  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 19:23:29.217379  432533 cri.go:89] found id: "04bedbd6c6d1a6948852e8a02d927b6d181e1cdd8b926cadb512e6c5a9e2bc18"
	I1003 19:23:29.217398  432533 cri.go:89] found id: ""
	I1003 19:23:29.217406  432533 logs.go:282] 1 containers: [04bedbd6c6d1a6948852e8a02d927b6d181e1cdd8b926cadb512e6c5a9e2bc18]
	I1003 19:23:29.217462  432533 ssh_runner.go:195] Run: which crictl
	I1003 19:23:29.222192  432533 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 19:23:29.222271  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 19:23:29.253807  432533 cri.go:89] found id: ""
	I1003 19:23:29.253828  432533 logs.go:282] 0 containers: []
	W1003 19:23:29.253836  432533 logs.go:284] No container was found matching "etcd"
	I1003 19:23:29.253842  432533 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 19:23:29.253912  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 19:23:29.319033  432533 cri.go:89] found id: ""
	I1003 19:23:29.319061  432533 logs.go:282] 0 containers: []
	W1003 19:23:29.319070  432533 logs.go:284] No container was found matching "coredns"
	I1003 19:23:29.319076  432533 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 19:23:29.319130  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 19:23:29.352108  432533 cri.go:89] found id: "dc5ccb1606b8053522f591bfece54cc0b244422b0fc5af82da81d2215cabb3a1"
	I1003 19:23:29.352126  432533 cri.go:89] found id: ""
	I1003 19:23:29.352134  432533 logs.go:282] 1 containers: [dc5ccb1606b8053522f591bfece54cc0b244422b0fc5af82da81d2215cabb3a1]
	I1003 19:23:29.352206  432533 ssh_runner.go:195] Run: which crictl
	I1003 19:23:29.356568  432533 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 19:23:29.356642  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 19:23:29.397498  432533 cri.go:89] found id: ""
	I1003 19:23:29.397519  432533 logs.go:282] 0 containers: []
	W1003 19:23:29.397533  432533 logs.go:284] No container was found matching "kube-proxy"
	I1003 19:23:29.397540  432533 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 19:23:29.397597  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 19:23:29.434151  432533 cri.go:89] found id: "c62c95f3827d9847bcb196dcef4991555aba77e95ba4e6a5900a14faf7679b30"
	I1003 19:23:29.434225  432533 cri.go:89] found id: ""
	I1003 19:23:29.434249  432533 logs.go:282] 1 containers: [c62c95f3827d9847bcb196dcef4991555aba77e95ba4e6a5900a14faf7679b30]
	I1003 19:23:29.434319  432533 ssh_runner.go:195] Run: which crictl
	I1003 19:23:29.438661  432533 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 19:23:29.438739  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 19:23:29.485725  432533 cri.go:89] found id: ""
	I1003 19:23:29.485789  432533 logs.go:282] 0 containers: []
	W1003 19:23:29.485811  432533 logs.go:284] No container was found matching "kindnet"
	I1003 19:23:29.485844  432533 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1003 19:23:29.485924  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1003 19:23:29.522585  432533 cri.go:89] found id: ""
	I1003 19:23:29.522671  432533 logs.go:282] 0 containers: []
	W1003 19:23:29.522696  432533 logs.go:284] No container was found matching "storage-provisioner"
	I1003 19:23:29.522730  432533 logs.go:123] Gathering logs for kube-controller-manager [c62c95f3827d9847bcb196dcef4991555aba77e95ba4e6a5900a14faf7679b30] ...
	I1003 19:23:29.522760  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c62c95f3827d9847bcb196dcef4991555aba77e95ba4e6a5900a14faf7679b30"
	I1003 19:23:29.571044  432533 logs.go:123] Gathering logs for CRI-O ...
	I1003 19:23:29.571127  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 19:23:29.635213  432533 logs.go:123] Gathering logs for container status ...
	I1003 19:23:29.635291  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 19:23:29.676858  432533 logs.go:123] Gathering logs for kubelet ...
	I1003 19:23:29.676927  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 19:23:29.800583  432533 logs.go:123] Gathering logs for dmesg ...
	I1003 19:23:29.800648  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 19:23:29.817324  432533 logs.go:123] Gathering logs for describe nodes ...
	I1003 19:23:29.817474  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 19:23:29.912526  432533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 19:23:29.912605  432533 logs.go:123] Gathering logs for kube-apiserver [04bedbd6c6d1a6948852e8a02d927b6d181e1cdd8b926cadb512e6c5a9e2bc18] ...
	I1003 19:23:29.912810  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 04bedbd6c6d1a6948852e8a02d927b6d181e1cdd8b926cadb512e6c5a9e2bc18"
	I1003 19:23:29.958206  432533 logs.go:123] Gathering logs for kube-scheduler [dc5ccb1606b8053522f591bfece54cc0b244422b0fc5af82da81d2215cabb3a1] ...
	I1003 19:23:29.958239  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dc5ccb1606b8053522f591bfece54cc0b244422b0fc5af82da81d2215cabb3a1"
	I1003 19:23:30.418944  444809 cli_runner.go:164] Run: docker network inspect pause-844729 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 19:23:30.434771  444809 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1003 19:23:30.438704  444809 kubeadm.go:883] updating cluster {Name:pause-844729 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-844729 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1003 19:23:30.438833  444809 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 19:23:30.438884  444809 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 19:23:30.472115  444809 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 19:23:30.472141  444809 crio.go:433] Images already preloaded, skipping extraction
	I1003 19:23:30.472195  444809 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 19:23:30.497303  444809 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 19:23:30.497326  444809 cache_images.go:85] Images are preloaded, skipping loading
	I1003 19:23:30.497334  444809 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1003 19:23:30.497448  444809 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-844729 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-844729 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
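In the generated drop-in above, the empty `ExecStart=` line is the standard systemd idiom for clearing the ExecStart inherited from the packaged kubelet unit before redefining it with minikube's flags; the container runtime endpoint is not passed as a flag but comes from the KubeletConfiguration referenced by --config. Once the drop-in has been written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (a few lines below), the merged unit can be reviewed on the node:

    # Show the packaged kubelet unit plus minikube's drop-in, and the ExecStart that wins.
    systemctl cat kubelet
    systemctl show kubelet -p ExecStart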
	I1003 19:23:30.497529  444809 ssh_runner.go:195] Run: crio config
	I1003 19:23:30.568383  444809 cni.go:84] Creating CNI manager for ""
	I1003 19:23:30.568459  444809 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 19:23:30.568490  444809 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1003 19:23:30.568537  444809 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-844729 NodeName:pause-844729 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 19:23:30.568707  444809 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-844729"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1003 19:23:30.568842  444809 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1003 19:23:30.579556  444809 binaries.go:44] Found k8s binaries, skipping transfer
	I1003 19:23:30.579653  444809 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1003 19:23:30.587738  444809 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1003 19:23:30.600325  444809 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 19:23:30.613471  444809 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
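The rendered kubeadm config above is shipped to the node as /var/tmp/minikube/kubeadm.yaml.new. If a start failure looks config-related, kubeadm can lint that file before it is ever applied; a sketch, assuming the `config validate` subcommand is present in the bundled v1.34.1 binary:

    # Best-effort lint of the generated kubeadm config on the node
    # (assumes "kubeadm config validate" exists in this kubeadm version).
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new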
	I1003 19:23:30.626741  444809 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1003 19:23:30.630607  444809 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 19:23:30.765830  444809 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 19:23:30.779527  444809 certs.go:69] Setting up /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/pause-844729 for IP: 192.168.76.2
	I1003 19:23:30.779549  444809 certs.go:195] generating shared ca certs ...
	I1003 19:23:30.779565  444809 certs.go:227] acquiring lock for ca certs: {Name:mk5a10e6c921326e9c211447576eaeb893259ba7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:23:30.779750  444809 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21625-284583/.minikube/ca.key
	I1003 19:23:30.779811  444809 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.key
	I1003 19:23:30.779827  444809 certs.go:257] generating profile certs ...
	I1003 19:23:30.779950  444809 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/pause-844729/client.key
	I1003 19:23:30.780063  444809 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/pause-844729/apiserver.key.62249f20
	I1003 19:23:30.780141  444809 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/pause-844729/proxy-client.key
	I1003 19:23:30.780294  444809 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/286434.pem (1338 bytes)
	W1003 19:23:30.780350  444809 certs.go:480] ignoring /home/jenkins/minikube-integration/21625-284583/.minikube/certs/286434_empty.pem, impossibly tiny 0 bytes
	I1003 19:23:30.780366  444809 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 19:23:30.780395  444809 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem (1082 bytes)
	I1003 19:23:30.780452  444809 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem (1123 bytes)
	I1003 19:23:30.780485  444809 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/key.pem (1675 bytes)
	I1003 19:23:30.780564  444809 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem (1708 bytes)
	I1003 19:23:30.781265  444809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 19:23:30.800060  444809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1003 19:23:30.817096  444809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 19:23:30.834124  444809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1003 19:23:30.851779  444809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/pause-844729/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1003 19:23:30.869243  444809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/pause-844729/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1003 19:23:30.886500  444809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/pause-844729/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 19:23:30.903838  444809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/pause-844729/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1003 19:23:30.921060  444809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/certs/286434.pem --> /usr/share/ca-certificates/286434.pem (1338 bytes)
	I1003 19:23:30.938097  444809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem --> /usr/share/ca-certificates/2864342.pem (1708 bytes)
	I1003 19:23:30.962454  444809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 19:23:31.000431  444809 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1003 19:23:31.026636  444809 ssh_runner.go:195] Run: openssl version
	I1003 19:23:31.035020  444809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/286434.pem && ln -fs /usr/share/ca-certificates/286434.pem /etc/ssl/certs/286434.pem"
	I1003 19:23:31.046323  444809 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/286434.pem
	I1003 19:23:31.051386  444809 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  3 18:34 /usr/share/ca-certificates/286434.pem
	I1003 19:23:31.051513  444809 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/286434.pem
	I1003 19:23:31.180772  444809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/286434.pem /etc/ssl/certs/51391683.0"
	I1003 19:23:31.201717  444809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2864342.pem && ln -fs /usr/share/ca-certificates/2864342.pem /etc/ssl/certs/2864342.pem"
	I1003 19:23:31.221236  444809 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2864342.pem
	I1003 19:23:31.236900  444809 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  3 18:34 /usr/share/ca-certificates/2864342.pem
	I1003 19:23:31.237003  444809 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2864342.pem
	I1003 19:23:31.331982  444809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2864342.pem /etc/ssl/certs/3ec20f2e.0"
	I1003 19:23:31.348417  444809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 19:23:31.360913  444809 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 19:23:31.370626  444809 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  3 18:27 /usr/share/ca-certificates/minikubeCA.pem
	I1003 19:23:31.370759  444809 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 19:23:31.439176  444809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
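Each `ln -fs` in this block publishes a CA into /etc/ssl/certs under its OpenSSL subject-hash name (51391683.0, 3ec20f2e.0 and b5213941.0 here), which is exactly what the preceding `openssl x509 -hash -noout` calls compute. The link name can be derived by hand the same way; a sketch for the minikube CA:

    # Derive the subject-hash link name OpenSSL expects for a trusted certificate.
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    echo "${hash}.0"                                  # b5213941.0 for this CA, per the log above
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"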
	I1003 19:23:31.451766  444809 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 19:23:31.458026  444809 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1003 19:23:31.525372  444809 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1003 19:23:31.590886  444809 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1003 19:23:31.653634  444809 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1003 19:23:31.718387  444809 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1003 19:23:31.766607  444809 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
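The `-checkend 86400` probes above ask openssl whether each control-plane certificate will still be valid 86400 seconds (24 hours) from now: exit status 0 means it will not expire within that window, non-zero means it will or already has, which is presumably what drives minikube's decision to reuse rather than regenerate them. The same check in isolation:

    # Exit 0 if the certificate is still valid 24h from now, non-zero if it expires sooner.
    sudo openssl x509 -noout -checkend 86400 \
      -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
      && echo "valid for at least another day" \
      || echo "expires within 24h (or has already expired)"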
	I1003 19:23:31.814986  444809 kubeadm.go:400] StartCluster: {Name:pause-844729 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-844729 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 19:23:31.815138  444809 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1003 19:23:31.815227  444809 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1003 19:23:31.874623  444809 cri.go:89] found id: "b4edb0bc8b2e10ddd91a1f18e41714e9b020effe870b870ad1548c51abdd698a"
	I1003 19:23:31.874649  444809 cri.go:89] found id: "b7cace5722ba0dea6c3f841afbdc009c616b089fc76a656644b067a4f8e082ea"
	I1003 19:23:31.874655  444809 cri.go:89] found id: "da0cffc30d07d485c02b0ec61d8a9b3909ac227213b2060ee5749f2e4c309f14"
	I1003 19:23:31.874662  444809 cri.go:89] found id: "45e08f5b3c8750ac2fd35558a348abcfe4889f155ac6450a819fd64a7c7330b8"
	I1003 19:23:31.874666  444809 cri.go:89] found id: "6168e29def1182e29c0bf294c1c3d7237309f9f85b32e17a56b611beab0de0f3"
	I1003 19:23:31.874695  444809 cri.go:89] found id: "e76d5b298ebfdc13c2635e65d607a1504f98294c7e20d1bb64f2ce5a749224ef"
	I1003 19:23:31.874705  444809 cri.go:89] found id: "5bc9d928c66f715d2cb955773ff9a4ceeac2d33a54d32a1544eac9d3e61700fe"
	I1003 19:23:31.874709  444809 cri.go:89] found id: "84fa045c869f127f450bb8752bea5a8159645bcb9dc95bf2aa9c7f45b5311ca2"
	I1003 19:23:31.874712  444809 cri.go:89] found id: "5d124f9877dc3034ad8f48f78e4d24801d20c0a339bfef51da35d2994dbc8ecd"
	I1003 19:23:31.874720  444809 cri.go:89] found id: "857ea2e27fd5446162221b5717f5c41724882e4d6d67b73122cbadfde6751525"
	I1003 19:23:31.874724  444809 cri.go:89] found id: "0e24f3ce9f6cbd2fee0b930845a84383d871589f9e0d5410c93ebc0a1007c92f"
	I1003 19:23:31.874728  444809 cri.go:89] found id: "fd3fe7965793a71c3c6f9b9521b6b0c283e6b5ed6f1f5aee7fbfb482b5af6f32"
	I1003 19:23:31.874733  444809 cri.go:89] found id: "6f18ec5c83f04389f6cce9ba80e373f135129e84c9590239ca46414eb849a154"
	I1003 19:23:31.874743  444809 cri.go:89] found id: "fe077fc7b7398ab6a71e31a253a8c67d7227163b1d3d6d2ff769425cebd43420"
	I1003 19:23:31.874746  444809 cri.go:89] found id: ""
	I1003 19:23:31.874809  444809 ssh_runner.go:195] Run: sudo runc list -f json
	W1003 19:23:31.907330  444809 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-03T19:23:31Z" level=error msg="open /run/runc: no such file or directory"
	I1003 19:23:31.907481  444809 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 19:23:31.923289  444809 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1003 19:23:31.923313  444809 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1003 19:23:31.923404  444809 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1003 19:23:31.942557  444809 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1003 19:23:31.943259  444809 kubeconfig.go:125] found "pause-844729" server: "https://192.168.76.2:8443"
	I1003 19:23:31.944073  444809 kapi.go:59] client config for pause-844729: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21625-284583/.minikube/profiles/pause-844729/client.crt", KeyFile:"/home/jenkins/minikube-integration/21625-284583/.minikube/profiles/pause-844729/client.key", CAFile:"/home/jenkins/minikube-integration/21625-284583/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]s
tring(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1003 19:23:31.944769  444809 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1003 19:23:31.944816  444809 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1003 19:23:31.944837  444809 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1003 19:23:31.944857  444809 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1003 19:23:31.944877  444809 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1003 19:23:31.945193  444809 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1003 19:23:31.969028  444809 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1003 19:23:31.969110  444809 kubeadm.go:601] duration metric: took 45.790048ms to restartPrimaryControlPlane
	I1003 19:23:31.969136  444809 kubeadm.go:402] duration metric: took 154.174035ms to StartCluster
	I1003 19:23:31.969169  444809 settings.go:142] acquiring lock: {Name:mkc95577dbc448e3409dfa2b5e53a3a1327cb451 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:23:31.969250  444809 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21625-284583/kubeconfig
	I1003 19:23:31.970167  444809 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/kubeconfig: {Name:mkc1323fd87f4a78231a26d2dab0dff7feecf1e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:23:31.970420  444809 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1003 19:23:31.970830  444809 config.go:182] Loaded profile config "pause-844729": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 19:23:31.970817  444809 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1003 19:23:31.977596  444809 out.go:179] * Enabled addons: 
	I1003 19:23:31.977681  444809 out.go:179] * Verifying Kubernetes components...
	I1003 19:23:31.980667  444809 addons.go:514] duration metric: took 9.832364ms for enable addons: enabled=[]
	I1003 19:23:31.980821  444809 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 19:23:32.535538  432533 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1003 19:23:32.535876  432533 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1003 19:23:32.535924  432533 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 19:23:32.535976  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 19:23:32.596368  432533 cri.go:89] found id: "04bedbd6c6d1a6948852e8a02d927b6d181e1cdd8b926cadb512e6c5a9e2bc18"
	I1003 19:23:32.596387  432533 cri.go:89] found id: ""
	I1003 19:23:32.596396  432533 logs.go:282] 1 containers: [04bedbd6c6d1a6948852e8a02d927b6d181e1cdd8b926cadb512e6c5a9e2bc18]
	I1003 19:23:32.596454  432533 ssh_runner.go:195] Run: which crictl
	I1003 19:23:32.600366  432533 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 19:23:32.600433  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 19:23:32.640549  432533 cri.go:89] found id: ""
	I1003 19:23:32.640572  432533 logs.go:282] 0 containers: []
	W1003 19:23:32.640581  432533 logs.go:284] No container was found matching "etcd"
	I1003 19:23:32.640588  432533 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 19:23:32.640648  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 19:23:32.690008  432533 cri.go:89] found id: ""
	I1003 19:23:32.690030  432533 logs.go:282] 0 containers: []
	W1003 19:23:32.690040  432533 logs.go:284] No container was found matching "coredns"
	I1003 19:23:32.690047  432533 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 19:23:32.690103  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 19:23:32.748194  432533 cri.go:89] found id: "dc5ccb1606b8053522f591bfece54cc0b244422b0fc5af82da81d2215cabb3a1"
	I1003 19:23:32.748212  432533 cri.go:89] found id: ""
	I1003 19:23:32.748227  432533 logs.go:282] 1 containers: [dc5ccb1606b8053522f591bfece54cc0b244422b0fc5af82da81d2215cabb3a1]
	I1003 19:23:32.748285  432533 ssh_runner.go:195] Run: which crictl
	I1003 19:23:32.752292  432533 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 19:23:32.752361  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 19:23:32.810878  432533 cri.go:89] found id: ""
	I1003 19:23:32.810900  432533 logs.go:282] 0 containers: []
	W1003 19:23:32.810908  432533 logs.go:284] No container was found matching "kube-proxy"
	I1003 19:23:32.810916  432533 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 19:23:32.810971  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 19:23:32.861223  432533 cri.go:89] found id: "c62c95f3827d9847bcb196dcef4991555aba77e95ba4e6a5900a14faf7679b30"
	I1003 19:23:32.861297  432533 cri.go:89] found id: ""
	I1003 19:23:32.861321  432533 logs.go:282] 1 containers: [c62c95f3827d9847bcb196dcef4991555aba77e95ba4e6a5900a14faf7679b30]
	I1003 19:23:32.861401  432533 ssh_runner.go:195] Run: which crictl
	I1003 19:23:32.869084  432533 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 19:23:32.869204  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 19:23:32.905409  432533 cri.go:89] found id: ""
	I1003 19:23:32.905485  432533 logs.go:282] 0 containers: []
	W1003 19:23:32.905510  432533 logs.go:284] No container was found matching "kindnet"
	I1003 19:23:32.905529  432533 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1003 19:23:32.905623  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1003 19:23:32.961642  432533 cri.go:89] found id: ""
	I1003 19:23:32.961719  432533 logs.go:282] 0 containers: []
	W1003 19:23:32.961743  432533 logs.go:284] No container was found matching "storage-provisioner"
	I1003 19:23:32.961766  432533 logs.go:123] Gathering logs for kubelet ...
	I1003 19:23:32.961810  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 19:23:33.124526  432533 logs.go:123] Gathering logs for dmesg ...
	I1003 19:23:33.124603  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 19:23:33.147563  432533 logs.go:123] Gathering logs for describe nodes ...
	I1003 19:23:33.147639  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 19:23:33.287865  432533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 19:23:33.287937  432533 logs.go:123] Gathering logs for kube-apiserver [04bedbd6c6d1a6948852e8a02d927b6d181e1cdd8b926cadb512e6c5a9e2bc18] ...
	I1003 19:23:33.287963  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 04bedbd6c6d1a6948852e8a02d927b6d181e1cdd8b926cadb512e6c5a9e2bc18"
	I1003 19:23:33.338747  432533 logs.go:123] Gathering logs for kube-scheduler [dc5ccb1606b8053522f591bfece54cc0b244422b0fc5af82da81d2215cabb3a1] ...
	I1003 19:23:33.338817  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dc5ccb1606b8053522f591bfece54cc0b244422b0fc5af82da81d2215cabb3a1"
	I1003 19:23:33.442636  432533 logs.go:123] Gathering logs for kube-controller-manager [c62c95f3827d9847bcb196dcef4991555aba77e95ba4e6a5900a14faf7679b30] ...
	I1003 19:23:33.442671  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c62c95f3827d9847bcb196dcef4991555aba77e95ba4e6a5900a14faf7679b30"
	I1003 19:23:33.509574  432533 logs.go:123] Gathering logs for CRI-O ...
	I1003 19:23:33.509642  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 19:23:33.585523  432533 logs.go:123] Gathering logs for container status ...
	I1003 19:23:33.585562  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 19:23:36.164770  432533 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1003 19:23:36.165140  432533 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1003 19:23:36.165188  432533 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 19:23:36.165246  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 19:23:36.203237  432533 cri.go:89] found id: "04bedbd6c6d1a6948852e8a02d927b6d181e1cdd8b926cadb512e6c5a9e2bc18"
	I1003 19:23:36.203262  432533 cri.go:89] found id: ""
	I1003 19:23:36.203271  432533 logs.go:282] 1 containers: [04bedbd6c6d1a6948852e8a02d927b6d181e1cdd8b926cadb512e6c5a9e2bc18]
	I1003 19:23:36.203333  432533 ssh_runner.go:195] Run: which crictl
	I1003 19:23:36.207123  432533 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 19:23:36.207196  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 19:23:36.243531  432533 cri.go:89] found id: ""
	I1003 19:23:36.243558  432533 logs.go:282] 0 containers: []
	W1003 19:23:36.243568  432533 logs.go:284] No container was found matching "etcd"
	I1003 19:23:36.243581  432533 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 19:23:36.243638  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 19:23:36.277373  432533 cri.go:89] found id: ""
	I1003 19:23:36.277400  432533 logs.go:282] 0 containers: []
	W1003 19:23:36.277408  432533 logs.go:284] No container was found matching "coredns"
	I1003 19:23:36.277415  432533 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 19:23:36.277473  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 19:23:32.242925  444809 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 19:23:32.265164  444809 node_ready.go:35] waiting up to 6m0s for node "pause-844729" to be "Ready" ...
	I1003 19:23:35.303837  444809 node_ready.go:49] node "pause-844729" is "Ready"
	I1003 19:23:35.303927  444809 node_ready.go:38] duration metric: took 3.038678835s for node "pause-844729" to be "Ready" ...
	I1003 19:23:35.303957  444809 api_server.go:52] waiting for apiserver process to appear ...
	I1003 19:23:35.304046  444809 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 19:23:35.322130  444809 api_server.go:72] duration metric: took 3.35164839s to wait for apiserver process to appear ...
	I1003 19:23:35.322155  444809 api_server.go:88] waiting for apiserver healthz status ...
	I1003 19:23:35.322175  444809 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1003 19:23:35.337698  444809 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1003 19:23:35.337777  444809 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1003 19:23:35.823097  444809 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1003 19:23:35.831611  444809 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1003 19:23:35.831639  444809 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1003 19:23:36.322859  444809 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1003 19:23:36.343739  444809 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1003 19:23:36.343772  444809 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1003 19:23:36.822273  444809 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1003 19:23:36.834964  444809 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1003 19:23:36.836328  444809 api_server.go:141] control plane version: v1.34.1
	I1003 19:23:36.836358  444809 api_server.go:131] duration metric: took 1.514196138s to wait for apiserver health ...
	I1003 19:23:36.836367  444809 system_pods.go:43] waiting for kube-system pods to appear ...
	I1003 19:23:36.841816  444809 system_pods.go:59] 7 kube-system pods found
	I1003 19:23:36.841847  444809 system_pods.go:61] "coredns-66bc5c9577-z7pwb" [427f1d63-2b09-401a-b2f3-2e2a8248c11e] Running
	I1003 19:23:36.841853  444809 system_pods.go:61] "etcd-pause-844729" [560bbe09-f7d4-4218-8305-948f601f4cd4] Running
	I1003 19:23:36.841858  444809 system_pods.go:61] "kindnet-qhksz" [0596aa14-3857-4ba6-a81c-11b8c29baf94] Running
	I1003 19:23:36.841867  444809 system_pods.go:61] "kube-apiserver-pause-844729" [5d812d91-c2b2-4922-95f0-5dd38088ba5c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1003 19:23:36.841875  444809 system_pods.go:61] "kube-controller-manager-pause-844729" [079ad09c-44cf-41f0-b521-df3c4901c134] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1003 19:23:36.841880  444809 system_pods.go:61] "kube-proxy-vxnlc" [b9fa1a51-79ed-470a-a56a-1d830b23760e] Running
	I1003 19:23:36.841887  444809 system_pods.go:61] "kube-scheduler-pause-844729" [bb2ac954-5d5b-4df7-8de8-e0687da43946] Running
	I1003 19:23:36.841892  444809 system_pods.go:74] duration metric: took 5.520177ms to wait for pod list to return data ...
	I1003 19:23:36.841900  444809 default_sa.go:34] waiting for default service account to be created ...
	I1003 19:23:36.845673  444809 default_sa.go:45] found service account: "default"
	I1003 19:23:36.845696  444809 default_sa.go:55] duration metric: took 3.789997ms for default service account to be created ...
	I1003 19:23:36.845706  444809 system_pods.go:116] waiting for k8s-apps to be running ...
	I1003 19:23:36.848641  444809 system_pods.go:86] 7 kube-system pods found
	I1003 19:23:36.848776  444809 system_pods.go:89] "coredns-66bc5c9577-z7pwb" [427f1d63-2b09-401a-b2f3-2e2a8248c11e] Running
	I1003 19:23:36.848817  444809 system_pods.go:89] "etcd-pause-844729" [560bbe09-f7d4-4218-8305-948f601f4cd4] Running
	I1003 19:23:36.848836  444809 system_pods.go:89] "kindnet-qhksz" [0596aa14-3857-4ba6-a81c-11b8c29baf94] Running
	I1003 19:23:36.848856  444809 system_pods.go:89] "kube-apiserver-pause-844729" [5d812d91-c2b2-4922-95f0-5dd38088ba5c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1003 19:23:36.848892  444809 system_pods.go:89] "kube-controller-manager-pause-844729" [079ad09c-44cf-41f0-b521-df3c4901c134] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1003 19:23:36.848920  444809 system_pods.go:89] "kube-proxy-vxnlc" [b9fa1a51-79ed-470a-a56a-1d830b23760e] Running
	I1003 19:23:36.848942  444809 system_pods.go:89] "kube-scheduler-pause-844729" [bb2ac954-5d5b-4df7-8de8-e0687da43946] Running
	I1003 19:23:36.848976  444809 system_pods.go:126] duration metric: took 3.263645ms to wait for k8s-apps to be running ...
	I1003 19:23:36.848998  444809 system_svc.go:44] waiting for kubelet service to be running ....
	I1003 19:23:36.849089  444809 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 19:23:36.863201  444809 system_svc.go:56] duration metric: took 14.192423ms WaitForService to wait for kubelet
	I1003 19:23:36.863281  444809 kubeadm.go:586] duration metric: took 4.892803815s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 19:23:36.863315  444809 node_conditions.go:102] verifying NodePressure condition ...
	I1003 19:23:36.866615  444809 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1003 19:23:36.866693  444809 node_conditions.go:123] node cpu capacity is 2
	I1003 19:23:36.866720  444809 node_conditions.go:105] duration metric: took 3.378905ms to run NodePressure ...
	I1003 19:23:36.866748  444809 start.go:241] waiting for startup goroutines ...
	I1003 19:23:36.866776  444809 start.go:246] waiting for cluster config update ...
	I1003 19:23:36.866808  444809 start.go:255] writing updated cluster config ...
	I1003 19:23:36.867197  444809 ssh_runner.go:195] Run: rm -f paused
	I1003 19:23:36.871182  444809 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1003 19:23:36.871836  444809 kapi.go:59] client config for pause-844729: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21625-284583/.minikube/profiles/pause-844729/client.crt", KeyFile:"/home/jenkins/minikube-integration/21625-284583/.minikube/profiles/pause-844729/client.key", CAFile:"/home/jenkins/minikube-integration/21625-284583/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1003 19:23:36.875274  444809 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-z7pwb" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:23:36.881940  444809 pod_ready.go:94] pod "coredns-66bc5c9577-z7pwb" is "Ready"
	I1003 19:23:36.881969  444809 pod_ready.go:86] duration metric: took 6.665681ms for pod "coredns-66bc5c9577-z7pwb" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:23:36.885925  444809 pod_ready.go:83] waiting for pod "etcd-pause-844729" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:23:36.891374  444809 pod_ready.go:94] pod "etcd-pause-844729" is "Ready"
	I1003 19:23:36.891403  444809 pod_ready.go:86] duration metric: took 5.451096ms for pod "etcd-pause-844729" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:23:36.894724  444809 pod_ready.go:83] waiting for pod "kube-apiserver-pause-844729" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:23:36.331428  432533 cri.go:89] found id: "dc5ccb1606b8053522f591bfece54cc0b244422b0fc5af82da81d2215cabb3a1"
	I1003 19:23:36.331452  432533 cri.go:89] found id: ""
	I1003 19:23:36.331471  432533 logs.go:282] 1 containers: [dc5ccb1606b8053522f591bfece54cc0b244422b0fc5af82da81d2215cabb3a1]
	I1003 19:23:36.331528  432533 ssh_runner.go:195] Run: which crictl
	I1003 19:23:36.335468  432533 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 19:23:36.335550  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 19:23:36.380900  432533 cri.go:89] found id: ""
	I1003 19:23:36.380937  432533 logs.go:282] 0 containers: []
	W1003 19:23:36.380946  432533 logs.go:284] No container was found matching "kube-proxy"
	I1003 19:23:36.380953  432533 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 19:23:36.381020  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 19:23:36.416759  432533 cri.go:89] found id: "c62c95f3827d9847bcb196dcef4991555aba77e95ba4e6a5900a14faf7679b30"
	I1003 19:23:36.416791  432533 cri.go:89] found id: ""
	I1003 19:23:36.416803  432533 logs.go:282] 1 containers: [c62c95f3827d9847bcb196dcef4991555aba77e95ba4e6a5900a14faf7679b30]
	I1003 19:23:36.416870  432533 ssh_runner.go:195] Run: which crictl
	I1003 19:23:36.421209  432533 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 19:23:36.421300  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 19:23:36.452859  432533 cri.go:89] found id: ""
	I1003 19:23:36.452898  432533 logs.go:282] 0 containers: []
	W1003 19:23:36.452907  432533 logs.go:284] No container was found matching "kindnet"
	I1003 19:23:36.452913  432533 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1003 19:23:36.452979  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1003 19:23:36.482928  432533 cri.go:89] found id: ""
	I1003 19:23:36.482972  432533 logs.go:282] 0 containers: []
	W1003 19:23:36.482981  432533 logs.go:284] No container was found matching "storage-provisioner"
	I1003 19:23:36.482991  432533 logs.go:123] Gathering logs for kube-scheduler [dc5ccb1606b8053522f591bfece54cc0b244422b0fc5af82da81d2215cabb3a1] ...
	I1003 19:23:36.483004  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dc5ccb1606b8053522f591bfece54cc0b244422b0fc5af82da81d2215cabb3a1"
	I1003 19:23:36.552196  432533 logs.go:123] Gathering logs for kube-controller-manager [c62c95f3827d9847bcb196dcef4991555aba77e95ba4e6a5900a14faf7679b30] ...
	I1003 19:23:36.552235  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c62c95f3827d9847bcb196dcef4991555aba77e95ba4e6a5900a14faf7679b30"
	I1003 19:23:36.580885  432533 logs.go:123] Gathering logs for CRI-O ...
	I1003 19:23:36.580910  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 19:23:36.643579  432533 logs.go:123] Gathering logs for container status ...
	I1003 19:23:36.643654  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 19:23:36.689003  432533 logs.go:123] Gathering logs for kubelet ...
	I1003 19:23:36.689026  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 19:23:36.821480  432533 logs.go:123] Gathering logs for dmesg ...
	I1003 19:23:36.821516  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 19:23:36.839553  432533 logs.go:123] Gathering logs for describe nodes ...
	I1003 19:23:36.839585  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 19:23:36.944938  432533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 19:23:36.944973  432533 logs.go:123] Gathering logs for kube-apiserver [04bedbd6c6d1a6948852e8a02d927b6d181e1cdd8b926cadb512e6c5a9e2bc18] ...
	I1003 19:23:36.944989  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 04bedbd6c6d1a6948852e8a02d927b6d181e1cdd8b926cadb512e6c5a9e2bc18"
	I1003 19:23:39.478300  432533 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1003 19:23:39.478732  432533 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1003 19:23:39.478782  432533 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 19:23:39.478835  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 19:23:39.509516  432533 cri.go:89] found id: "04bedbd6c6d1a6948852e8a02d927b6d181e1cdd8b926cadb512e6c5a9e2bc18"
	I1003 19:23:39.509548  432533 cri.go:89] found id: ""
	I1003 19:23:39.509557  432533 logs.go:282] 1 containers: [04bedbd6c6d1a6948852e8a02d927b6d181e1cdd8b926cadb512e6c5a9e2bc18]
	I1003 19:23:39.509614  432533 ssh_runner.go:195] Run: which crictl
	I1003 19:23:39.513425  432533 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 19:23:39.513495  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 19:23:39.539280  432533 cri.go:89] found id: ""
	I1003 19:23:39.539303  432533 logs.go:282] 0 containers: []
	W1003 19:23:39.539311  432533 logs.go:284] No container was found matching "etcd"
	I1003 19:23:39.539318  432533 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 19:23:39.539418  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 19:23:39.565413  432533 cri.go:89] found id: ""
	I1003 19:23:39.565435  432533 logs.go:282] 0 containers: []
	W1003 19:23:39.565443  432533 logs.go:284] No container was found matching "coredns"
	I1003 19:23:39.565449  432533 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 19:23:39.565506  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 19:23:39.590330  432533 cri.go:89] found id: "dc5ccb1606b8053522f591bfece54cc0b244422b0fc5af82da81d2215cabb3a1"
	I1003 19:23:39.590353  432533 cri.go:89] found id: ""
	I1003 19:23:39.590362  432533 logs.go:282] 1 containers: [dc5ccb1606b8053522f591bfece54cc0b244422b0fc5af82da81d2215cabb3a1]
	I1003 19:23:39.590437  432533 ssh_runner.go:195] Run: which crictl
	I1003 19:23:39.593967  432533 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 19:23:39.594076  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 19:23:39.623196  432533 cri.go:89] found id: ""
	I1003 19:23:39.623227  432533 logs.go:282] 0 containers: []
	W1003 19:23:39.623237  432533 logs.go:284] No container was found matching "kube-proxy"
	I1003 19:23:39.623243  432533 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 19:23:39.623307  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 19:23:39.651035  432533 cri.go:89] found id: "c62c95f3827d9847bcb196dcef4991555aba77e95ba4e6a5900a14faf7679b30"
	I1003 19:23:39.651065  432533 cri.go:89] found id: ""
	I1003 19:23:39.651074  432533 logs.go:282] 1 containers: [c62c95f3827d9847bcb196dcef4991555aba77e95ba4e6a5900a14faf7679b30]
	I1003 19:23:39.651136  432533 ssh_runner.go:195] Run: which crictl
	I1003 19:23:39.654626  432533 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 19:23:39.654694  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 19:23:39.683461  432533 cri.go:89] found id: ""
	I1003 19:23:39.683534  432533 logs.go:282] 0 containers: []
	W1003 19:23:39.683557  432533 logs.go:284] No container was found matching "kindnet"
	I1003 19:23:39.683577  432533 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1003 19:23:39.683663  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1003 19:23:39.715039  432533 cri.go:89] found id: ""
	I1003 19:23:39.715064  432533 logs.go:282] 0 containers: []
	W1003 19:23:39.715072  432533 logs.go:284] No container was found matching "storage-provisioner"
	I1003 19:23:39.715082  432533 logs.go:123] Gathering logs for container status ...
	I1003 19:23:39.715093  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 19:23:39.743331  432533 logs.go:123] Gathering logs for kubelet ...
	I1003 19:23:39.743362  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 19:23:39.855453  432533 logs.go:123] Gathering logs for dmesg ...
	I1003 19:23:39.855490  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 19:23:39.872491  432533 logs.go:123] Gathering logs for describe nodes ...
	I1003 19:23:39.872521  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 19:23:39.939187  432533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 19:23:39.939207  432533 logs.go:123] Gathering logs for kube-apiserver [04bedbd6c6d1a6948852e8a02d927b6d181e1cdd8b926cadb512e6c5a9e2bc18] ...
	I1003 19:23:39.939221  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 04bedbd6c6d1a6948852e8a02d927b6d181e1cdd8b926cadb512e6c5a9e2bc18"
	I1003 19:23:39.984125  432533 logs.go:123] Gathering logs for kube-scheduler [dc5ccb1606b8053522f591bfece54cc0b244422b0fc5af82da81d2215cabb3a1] ...
	I1003 19:23:39.984159  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dc5ccb1606b8053522f591bfece54cc0b244422b0fc5af82da81d2215cabb3a1"
	I1003 19:23:40.057150  432533 logs.go:123] Gathering logs for kube-controller-manager [c62c95f3827d9847bcb196dcef4991555aba77e95ba4e6a5900a14faf7679b30] ...
	I1003 19:23:40.057186  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c62c95f3827d9847bcb196dcef4991555aba77e95ba4e6a5900a14faf7679b30"
	I1003 19:23:40.085486  432533 logs.go:123] Gathering logs for CRI-O ...
	I1003 19:23:40.085518  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1003 19:23:38.899729  444809 pod_ready.go:104] pod "kube-apiserver-pause-844729" is not "Ready", error: <nil>
	W1003 19:23:40.900924  444809 pod_ready.go:104] pod "kube-apiserver-pause-844729" is not "Ready", error: <nil>
	I1003 19:23:42.647553  432533 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1003 19:23:42.647994  432533 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1003 19:23:42.648041  432533 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 19:23:42.648103  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 19:23:42.680331  432533 cri.go:89] found id: "04bedbd6c6d1a6948852e8a02d927b6d181e1cdd8b926cadb512e6c5a9e2bc18"
	I1003 19:23:42.680355  432533 cri.go:89] found id: ""
	I1003 19:23:42.680363  432533 logs.go:282] 1 containers: [04bedbd6c6d1a6948852e8a02d927b6d181e1cdd8b926cadb512e6c5a9e2bc18]
	I1003 19:23:42.680419  432533 ssh_runner.go:195] Run: which crictl
	I1003 19:23:42.683970  432533 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 19:23:42.684080  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 19:23:42.711803  432533 cri.go:89] found id: ""
	I1003 19:23:42.711838  432533 logs.go:282] 0 containers: []
	W1003 19:23:42.711847  432533 logs.go:284] No container was found matching "etcd"
	I1003 19:23:42.711869  432533 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 19:23:42.711967  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 19:23:42.738657  432533 cri.go:89] found id: ""
	I1003 19:23:42.738698  432533 logs.go:282] 0 containers: []
	W1003 19:23:42.738707  432533 logs.go:284] No container was found matching "coredns"
	I1003 19:23:42.738713  432533 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 19:23:42.738804  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 19:23:42.767317  432533 cri.go:89] found id: "dc5ccb1606b8053522f591bfece54cc0b244422b0fc5af82da81d2215cabb3a1"
	I1003 19:23:42.767338  432533 cri.go:89] found id: ""
	I1003 19:23:42.767347  432533 logs.go:282] 1 containers: [dc5ccb1606b8053522f591bfece54cc0b244422b0fc5af82da81d2215cabb3a1]
	I1003 19:23:42.767404  432533 ssh_runner.go:195] Run: which crictl
	I1003 19:23:42.771217  432533 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 19:23:42.771284  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 19:23:42.798315  432533 cri.go:89] found id: ""
	I1003 19:23:42.798362  432533 logs.go:282] 0 containers: []
	W1003 19:23:42.798388  432533 logs.go:284] No container was found matching "kube-proxy"
	I1003 19:23:42.798398  432533 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 19:23:42.798484  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 19:23:42.825681  432533 cri.go:89] found id: "c62c95f3827d9847bcb196dcef4991555aba77e95ba4e6a5900a14faf7679b30"
	I1003 19:23:42.825705  432533 cri.go:89] found id: ""
	I1003 19:23:42.825714  432533 logs.go:282] 1 containers: [c62c95f3827d9847bcb196dcef4991555aba77e95ba4e6a5900a14faf7679b30]
	I1003 19:23:42.825790  432533 ssh_runner.go:195] Run: which crictl
	I1003 19:23:42.829471  432533 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 19:23:42.829572  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 19:23:42.856918  432533 cri.go:89] found id: ""
	I1003 19:23:42.856949  432533 logs.go:282] 0 containers: []
	W1003 19:23:42.856959  432533 logs.go:284] No container was found matching "kindnet"
	I1003 19:23:42.856965  432533 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1003 19:23:42.857026  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1003 19:23:42.883631  432533 cri.go:89] found id: ""
	I1003 19:23:42.883653  432533 logs.go:282] 0 containers: []
	W1003 19:23:42.883661  432533 logs.go:284] No container was found matching "storage-provisioner"
	I1003 19:23:42.883670  432533 logs.go:123] Gathering logs for dmesg ...
	I1003 19:23:42.883681  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 19:23:42.904093  432533 logs.go:123] Gathering logs for describe nodes ...
	I1003 19:23:42.904161  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 19:23:42.974734  432533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 19:23:42.974761  432533 logs.go:123] Gathering logs for kube-apiserver [04bedbd6c6d1a6948852e8a02d927b6d181e1cdd8b926cadb512e6c5a9e2bc18] ...
	I1003 19:23:42.974775  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 04bedbd6c6d1a6948852e8a02d927b6d181e1cdd8b926cadb512e6c5a9e2bc18"
	I1003 19:23:43.014144  432533 logs.go:123] Gathering logs for kube-scheduler [dc5ccb1606b8053522f591bfece54cc0b244422b0fc5af82da81d2215cabb3a1] ...
	I1003 19:23:43.014184  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dc5ccb1606b8053522f591bfece54cc0b244422b0fc5af82da81d2215cabb3a1"
	I1003 19:23:43.080172  432533 logs.go:123] Gathering logs for kube-controller-manager [c62c95f3827d9847bcb196dcef4991555aba77e95ba4e6a5900a14faf7679b30] ...
	I1003 19:23:43.080203  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c62c95f3827d9847bcb196dcef4991555aba77e95ba4e6a5900a14faf7679b30"
	I1003 19:23:43.140202  432533 logs.go:123] Gathering logs for CRI-O ...
	I1003 19:23:43.140229  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 19:23:43.201495  432533 logs.go:123] Gathering logs for container status ...
	I1003 19:23:43.201531  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 19:23:43.232539  432533 logs.go:123] Gathering logs for kubelet ...
	I1003 19:23:43.232566  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 19:23:45.859028  432533 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1003 19:23:45.859459  432533 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1003 19:23:45.859503  432533 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 19:23:45.859561  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 19:23:45.892662  432533 cri.go:89] found id: "04bedbd6c6d1a6948852e8a02d927b6d181e1cdd8b926cadb512e6c5a9e2bc18"
	I1003 19:23:45.892681  432533 cri.go:89] found id: ""
	I1003 19:23:45.892689  432533 logs.go:282] 1 containers: [04bedbd6c6d1a6948852e8a02d927b6d181e1cdd8b926cadb512e6c5a9e2bc18]
	I1003 19:23:45.892779  432533 ssh_runner.go:195] Run: which crictl
	I1003 19:23:45.898188  432533 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 19:23:45.898258  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 19:23:45.932330  432533 cri.go:89] found id: ""
	I1003 19:23:45.932353  432533 logs.go:282] 0 containers: []
	W1003 19:23:45.932362  432533 logs.go:284] No container was found matching "etcd"
	I1003 19:23:45.932368  432533 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 19:23:45.932430  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 19:23:45.962429  432533 cri.go:89] found id: ""
	I1003 19:23:45.962452  432533 logs.go:282] 0 containers: []
	W1003 19:23:45.962460  432533 logs.go:284] No container was found matching "coredns"
	I1003 19:23:45.962466  432533 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 19:23:45.962524  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 19:23:45.989699  432533 cri.go:89] found id: "dc5ccb1606b8053522f591bfece54cc0b244422b0fc5af82da81d2215cabb3a1"
	I1003 19:23:45.989722  432533 cri.go:89] found id: ""
	I1003 19:23:45.989732  432533 logs.go:282] 1 containers: [dc5ccb1606b8053522f591bfece54cc0b244422b0fc5af82da81d2215cabb3a1]
	I1003 19:23:45.989793  432533 ssh_runner.go:195] Run: which crictl
	I1003 19:23:45.993524  432533 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 19:23:45.993593  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 19:23:46.020602  432533 cri.go:89] found id: ""
	I1003 19:23:46.020631  432533 logs.go:282] 0 containers: []
	W1003 19:23:46.020640  432533 logs.go:284] No container was found matching "kube-proxy"
	I1003 19:23:46.020647  432533 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 19:23:46.020710  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 19:23:46.048002  432533 cri.go:89] found id: "c62c95f3827d9847bcb196dcef4991555aba77e95ba4e6a5900a14faf7679b30"
	I1003 19:23:46.048026  432533 cri.go:89] found id: ""
	I1003 19:23:46.048034  432533 logs.go:282] 1 containers: [c62c95f3827d9847bcb196dcef4991555aba77e95ba4e6a5900a14faf7679b30]
	I1003 19:23:46.048091  432533 ssh_runner.go:195] Run: which crictl
	I1003 19:23:46.051883  432533 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 19:23:46.051966  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 19:23:46.081600  432533 cri.go:89] found id: ""
	I1003 19:23:46.081626  432533 logs.go:282] 0 containers: []
	W1003 19:23:46.081635  432533 logs.go:284] No container was found matching "kindnet"
	I1003 19:23:46.081642  432533 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1003 19:23:46.081706  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1003 19:23:46.135656  432533 cri.go:89] found id: ""
	I1003 19:23:46.135685  432533 logs.go:282] 0 containers: []
	W1003 19:23:46.135694  432533 logs.go:284] No container was found matching "storage-provisioner"
	I1003 19:23:46.135704  432533 logs.go:123] Gathering logs for kube-controller-manager [c62c95f3827d9847bcb196dcef4991555aba77e95ba4e6a5900a14faf7679b30] ...
	I1003 19:23:46.135716  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c62c95f3827d9847bcb196dcef4991555aba77e95ba4e6a5900a14faf7679b30"
	I1003 19:23:46.200303  432533 logs.go:123] Gathering logs for CRI-O ...
	I1003 19:23:46.200331  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 19:23:46.275263  432533 logs.go:123] Gathering logs for container status ...
	I1003 19:23:46.275349  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1003 19:23:42.901251  444809 pod_ready.go:104] pod "kube-apiserver-pause-844729" is not "Ready", error: <nil>
	W1003 19:23:45.401779  444809 pod_ready.go:104] pod "kube-apiserver-pause-844729" is not "Ready", error: <nil>
	I1003 19:23:46.404201  444809 pod_ready.go:94] pod "kube-apiserver-pause-844729" is "Ready"
	I1003 19:23:46.404224  444809 pod_ready.go:86] duration metric: took 9.509472841s for pod "kube-apiserver-pause-844729" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:23:46.411223  444809 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-844729" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:23:46.422418  444809 pod_ready.go:94] pod "kube-controller-manager-pause-844729" is "Ready"
	I1003 19:23:46.422441  444809 pod_ready.go:86] duration metric: took 11.195284ms for pod "kube-controller-manager-pause-844729" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:23:46.426015  444809 pod_ready.go:83] waiting for pod "kube-proxy-vxnlc" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:23:46.436115  444809 pod_ready.go:94] pod "kube-proxy-vxnlc" is "Ready"
	I1003 19:23:46.436136  444809 pod_ready.go:86] duration metric: took 10.104098ms for pod "kube-proxy-vxnlc" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:23:46.443413  444809 pod_ready.go:83] waiting for pod "kube-scheduler-pause-844729" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:23:46.597512  444809 pod_ready.go:94] pod "kube-scheduler-pause-844729" is "Ready"
	I1003 19:23:46.597536  444809 pod_ready.go:86] duration metric: took 154.105274ms for pod "kube-scheduler-pause-844729" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:23:46.597547  444809 pod_ready.go:40] duration metric: took 9.726334779s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1003 19:23:46.667748  444809 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1003 19:23:46.670958  444809 out.go:179] * Done! kubectl is now configured to use "pause-844729" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 03 19:23:31 pause-844729 crio[2057]: time="2025-10-03T19:23:31.262941064Z" level=info msg="Created container 6168e29def1182e29c0bf294c1c3d7237309f9f85b32e17a56b611beab0de0f3: kube-system/kube-proxy-vxnlc/kube-proxy" id=5913cde7-fc45-475e-9031-3a820599154b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:23:31 pause-844729 crio[2057]: time="2025-10-03T19:23:31.264032833Z" level=info msg="Starting container: 6168e29def1182e29c0bf294c1c3d7237309f9f85b32e17a56b611beab0de0f3" id=e20c315e-f72f-4306-b4d1-bcc9cde9ceaa name=/runtime.v1.RuntimeService/StartContainer
	Oct 03 19:23:31 pause-844729 crio[2057]: time="2025-10-03T19:23:31.271441722Z" level=info msg="Started container" PID=2290 containerID=45e08f5b3c8750ac2fd35558a348abcfe4889f155ac6450a819fd64a7c7330b8 description=kube-system/coredns-66bc5c9577-z7pwb/coredns id=f07528cc-9c18-4fc2-a250-7063dc0a4f2d name=/runtime.v1.RuntimeService/StartContainer sandboxID=5cc933b35d332d0d876c4eb2f62af8e09a9143f4bde00dbd2713f86227e431c5
	Oct 03 19:23:31 pause-844729 crio[2057]: time="2025-10-03T19:23:31.27372286Z" level=info msg="Created container da0cffc30d07d485c02b0ec61d8a9b3909ac227213b2060ee5749f2e4c309f14: kube-system/kube-scheduler-pause-844729/kube-scheduler" id=18bc5083-fe63-49bc-9260-109a8fa181a9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:23:31 pause-844729 crio[2057]: time="2025-10-03T19:23:31.294021135Z" level=info msg="Started container" PID=2282 containerID=6168e29def1182e29c0bf294c1c3d7237309f9f85b32e17a56b611beab0de0f3 description=kube-system/kube-proxy-vxnlc/kube-proxy id=e20c315e-f72f-4306-b4d1-bcc9cde9ceaa name=/runtime.v1.RuntimeService/StartContainer sandboxID=644fa1917083fbc943674808dbbdd1d251a3fd88a79fef84039bf218ca1695b8
	Oct 03 19:23:31 pause-844729 crio[2057]: time="2025-10-03T19:23:31.294933142Z" level=info msg="Starting container: da0cffc30d07d485c02b0ec61d8a9b3909ac227213b2060ee5749f2e4c309f14" id=442e6b27-7dae-46b5-9880-98794fa09c69 name=/runtime.v1.RuntimeService/StartContainer
	Oct 03 19:23:31 pause-844729 crio[2057]: time="2025-10-03T19:23:31.30118577Z" level=info msg="Started container" PID=2288 containerID=da0cffc30d07d485c02b0ec61d8a9b3909ac227213b2060ee5749f2e4c309f14 description=kube-system/kube-scheduler-pause-844729/kube-scheduler id=442e6b27-7dae-46b5-9880-98794fa09c69 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ca7502bb97a300a30f11e885c6606dfcc69ed119dd79ec50f2e178e7692fd36a
	Oct 03 19:23:31 pause-844729 crio[2057]: time="2025-10-03T19:23:31.301943083Z" level=info msg="Created container b7cace5722ba0dea6c3f841afbdc009c616b089fc76a656644b067a4f8e082ea: kube-system/etcd-pause-844729/etcd" id=c2779ed4-9929-4231-b1fa-eab9f2ec0481 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:23:31 pause-844729 crio[2057]: time="2025-10-03T19:23:31.302220142Z" level=info msg="Created container b4edb0bc8b2e10ddd91a1f18e41714e9b020effe870b870ad1548c51abdd698a: kube-system/kube-apiserver-pause-844729/kube-apiserver" id=3c7277d9-edb0-41f2-8505-59a27e323354 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:23:31 pause-844729 crio[2057]: time="2025-10-03T19:23:31.30335201Z" level=info msg="Starting container: b4edb0bc8b2e10ddd91a1f18e41714e9b020effe870b870ad1548c51abdd698a" id=e10fff39-d34b-4d16-b490-42ed334bb5a0 name=/runtime.v1.RuntimeService/StartContainer
	Oct 03 19:23:31 pause-844729 crio[2057]: time="2025-10-03T19:23:31.303530483Z" level=info msg="Starting container: b7cace5722ba0dea6c3f841afbdc009c616b089fc76a656644b067a4f8e082ea" id=9a4e8af7-6a4b-4f8b-bf57-ab37f1a2d971 name=/runtime.v1.RuntimeService/StartContainer
	Oct 03 19:23:31 pause-844729 crio[2057]: time="2025-10-03T19:23:31.324040248Z" level=info msg="Started container" PID=2315 containerID=b4edb0bc8b2e10ddd91a1f18e41714e9b020effe870b870ad1548c51abdd698a description=kube-system/kube-apiserver-pause-844729/kube-apiserver id=e10fff39-d34b-4d16-b490-42ed334bb5a0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0ddc7b2b69ef2ab3ec3a577469acdd1b7acef0085201a5fcde402b8d38aa7a50
	Oct 03 19:23:31 pause-844729 crio[2057]: time="2025-10-03T19:23:31.324319145Z" level=info msg="Started container" PID=2317 containerID=b7cace5722ba0dea6c3f841afbdc009c616b089fc76a656644b067a4f8e082ea description=kube-system/etcd-pause-844729/etcd id=9a4e8af7-6a4b-4f8b-bf57-ab37f1a2d971 name=/runtime.v1.RuntimeService/StartContainer sandboxID=efa997eec887f8dc5f8eefa59d472018f4b4caf06d80c163980f4f5e0a747155
	Oct 03 19:23:41 pause-844729 crio[2057]: time="2025-10-03T19:23:41.398406412Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 03 19:23:41 pause-844729 crio[2057]: time="2025-10-03T19:23:41.403312063Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 03 19:23:41 pause-844729 crio[2057]: time="2025-10-03T19:23:41.403476152Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 03 19:23:41 pause-844729 crio[2057]: time="2025-10-03T19:23:41.403555645Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 03 19:23:41 pause-844729 crio[2057]: time="2025-10-03T19:23:41.406955095Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 03 19:23:41 pause-844729 crio[2057]: time="2025-10-03T19:23:41.406988671Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 03 19:23:41 pause-844729 crio[2057]: time="2025-10-03T19:23:41.407011293Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 03 19:23:41 pause-844729 crio[2057]: time="2025-10-03T19:23:41.410151745Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 03 19:23:41 pause-844729 crio[2057]: time="2025-10-03T19:23:41.410185263Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 03 19:23:41 pause-844729 crio[2057]: time="2025-10-03T19:23:41.410207015Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 03 19:23:41 pause-844729 crio[2057]: time="2025-10-03T19:23:41.413261262Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 03 19:23:41 pause-844729 crio[2057]: time="2025-10-03T19:23:41.413295175Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	b4edb0bc8b2e1       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   18 seconds ago       Running             kube-apiserver            1                   0ddc7b2b69ef2       kube-apiserver-pause-844729            kube-system
	b7cace5722ba0       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   18 seconds ago       Running             etcd                      1                   efa997eec887f       etcd-pause-844729                      kube-system
	da0cffc30d07d       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   18 seconds ago       Running             kube-scheduler            1                   ca7502bb97a30       kube-scheduler-pause-844729            kube-system
	45e08f5b3c875       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   18 seconds ago       Running             coredns                   1                   5cc933b35d332       coredns-66bc5c9577-z7pwb               kube-system
	6168e29def118       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   18 seconds ago       Running             kube-proxy                1                   644fa1917083f       kube-proxy-vxnlc                       kube-system
	e76d5b298ebfd       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   18 seconds ago       Running             kindnet-cni               1                   0315418422ebd       kindnet-qhksz                          kube-system
	5bc9d928c66f7       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   18 seconds ago       Running             kube-controller-manager   1                   e33cde34f7d3d       kube-controller-manager-pause-844729   kube-system
	84fa045c869f1       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   30 seconds ago       Exited              coredns                   0                   5cc933b35d332       coredns-66bc5c9577-z7pwb               kube-system
	5d124f9877dc3       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   0315418422ebd       kindnet-qhksz                          kube-system
	857ea2e27fd54       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   644fa1917083f       kube-proxy-vxnlc                       kube-system
	0e24f3ce9f6cb       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   efa997eec887f       etcd-pause-844729                      kube-system
	fd3fe7965793a       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   0ddc7b2b69ef2       kube-apiserver-pause-844729            kube-system
	6f18ec5c83f04       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   ca7502bb97a30       kube-scheduler-pause-844729            kube-system
	fe077fc7b7398       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   e33cde34f7d3d       kube-controller-manager-pause-844729   kube-system
	
	
	==> coredns [45e08f5b3c8750ac2fd35558a348abcfe4889f155ac6450a819fd64a7c7330b8] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40072 - 43023 "HINFO IN 8662707546940497813.8488772312819346048. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012124736s
	
	
	==> coredns [84fa045c869f127f450bb8752bea5a8159645bcb9dc95bf2aa9c7f45b5311ca2] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53695 - 65056 "HINFO IN 7853163631885255782.674258907920431086. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.012733427s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-844729
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-844729
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a43873c79fc22f8b1ccd29d3dfa635d392b09335
	                    minikube.k8s.io/name=pause-844729
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_03T19_22_32_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 03 Oct 2025 19:22:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-844729
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 03 Oct 2025 19:23:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 03 Oct 2025 19:23:45 +0000   Fri, 03 Oct 2025 19:22:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 03 Oct 2025 19:23:45 +0000   Fri, 03 Oct 2025 19:22:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 03 Oct 2025 19:23:45 +0000   Fri, 03 Oct 2025 19:22:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 03 Oct 2025 19:23:45 +0000   Fri, 03 Oct 2025 19:23:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-844729
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 02d1e61509b2434098162c1013a2ff8e
	  System UUID:                0531cd00-e7b4-4767-9f36-05e850ecbd5e
	  Boot ID:                    3762136e-8bec-4104-a5cb-0b1976f6048e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-z7pwb                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     72s
	  kube-system                 etcd-pause-844729                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         77s
	  kube-system                 kindnet-qhksz                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      73s
	  kube-system                 kube-apiserver-pause-844729             250m (12%)    0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 kube-controller-manager-pause-844729    200m (10%)    0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 kube-proxy-vxnlc                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 kube-scheduler-pause-844729             100m (5%)     0 (0%)      0 (0%)           0 (0%)         77s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 71s                kube-proxy       
	  Normal   Starting                 12s                kube-proxy       
	  Warning  CgroupV1                 86s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  86s (x8 over 86s)  kubelet          Node pause-844729 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    86s (x8 over 86s)  kubelet          Node pause-844729 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     86s (x8 over 86s)  kubelet          Node pause-844729 status is now: NodeHasSufficientPID
	  Normal   Starting                 78s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 78s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  78s                kubelet          Node pause-844729 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    78s                kubelet          Node pause-844729 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     78s                kubelet          Node pause-844729 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           74s                node-controller  Node pause-844729 event: Registered Node pause-844729 in Controller
	  Normal   NodeReady                31s                kubelet          Node pause-844729 status is now: NodeReady
	  Normal   RegisteredNode           11s                node-controller  Node pause-844729 event: Registered Node pause-844729 in Controller
	
	
	==> dmesg <==
	[Oct 3 18:56] overlayfs: idmapped layers are currently not supported
	[  +3.564365] overlayfs: idmapped layers are currently not supported
	[Oct 3 18:58] overlayfs: idmapped layers are currently not supported
	[Oct 3 18:59] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:00] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:05] overlayfs: idmapped layers are currently not supported
	[ +33.149550] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:07] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:08] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:09] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:10] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:11] overlayfs: idmapped layers are currently not supported
	[  +4.287643] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:12] overlayfs: idmapped layers are currently not supported
	[ +24.839009] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:13] overlayfs: idmapped layers are currently not supported
	[ +26.493253] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:15] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:16] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:17] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000010] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Oct 3 19:18] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:20] overlayfs: idmapped layers are currently not supported
	[ +32.018892] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:22] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [0e24f3ce9f6cbd2fee0b930845a84383d871589f9e0d5410c93ebc0a1007c92f] <==
	{"level":"warn","ts":"2025-10-03T19:22:27.211672Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:22:27.241236Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:22:27.304946Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:22:27.330681Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:22:27.354034Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:22:27.418130Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:22:27.544841Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41924","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-03T19:23:23.301959Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-03T19:23:23.302010Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-844729","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"error","ts":"2025-10-03T19:23:23.302096Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-03T19:23:23.447356Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-03T19:23:23.447497Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2025-10-03T19:23:23.447708Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-03T19:23:23.447756Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"error","ts":"2025-10-03T19:23:23.447269Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"warn","ts":"2025-10-03T19:23:23.448113Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-03T19:23:23.448246Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-03T19:23:23.448280Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-03T19:23:23.448369Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-03T19:23:23.448428Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-03T19:23:23.448462Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-03T19:23:23.451296Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"error","ts":"2025-10-03T19:23:23.451426Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-03T19:23:23.451500Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-03T19:23:23.451535Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-844729","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> etcd [b7cace5722ba0dea6c3f841afbdc009c616b089fc76a656644b067a4f8e082ea] <==
	{"level":"warn","ts":"2025-10-03T19:23:33.766283Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:23:33.793003Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:23:33.822940Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:23:33.853216Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:23:33.882833Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:23:33.914995Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:23:33.944835Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:23:33.983369Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:23:33.990703Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:23:34.019393Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:23:34.062052Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:23:34.099320Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:23:34.131068Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:23:34.157325Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:23:34.201341Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:23:34.225216Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:23:34.257279Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:23:34.269491Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:23:34.292063Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:23:34.310935Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:23:34.327274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:23:34.361847Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:23:34.385955Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:23:34.414948Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:23:34.496349Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50826","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 19:23:49 up  2:06,  0 user,  load average: 2.85, 3.23, 2.58
	Linux pause-844729 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5d124f9877dc3034ad8f48f78e4d24801d20c0a339bfef51da35d2994dbc8ecd] <==
	I1003 19:22:38.194205       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1003 19:22:38.194455       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1003 19:22:38.194595       1 main.go:148] setting mtu 1500 for CNI 
	I1003 19:22:38.194614       1 main.go:178] kindnetd IP family: "ipv4"
	I1003 19:22:38.194624       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-03T19:22:38Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1003 19:22:38.395011       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1003 19:22:38.395037       1 controller.go:381] "Waiting for informer caches to sync"
	I1003 19:22:38.395047       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1003 19:22:38.395330       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1003 19:23:08.395141       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1003 19:23:08.395141       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1003 19:23:08.395372       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1003 19:23:08.396409       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1003 19:23:09.995523       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1003 19:23:09.995637       1 metrics.go:72] Registering metrics
	I1003 19:23:09.995732       1 controller.go:711] "Syncing nftables rules"
	I1003 19:23:18.395564       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1003 19:23:18.395625       1 main.go:301] handling current node
	
	
	==> kindnet [e76d5b298ebfdc13c2635e65d607a1504f98294c7e20d1bb64f2ce5a749224ef] <==
	I1003 19:23:31.131147       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1003 19:23:31.196289       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1003 19:23:31.196435       1 main.go:148] setting mtu 1500 for CNI 
	I1003 19:23:31.196448       1 main.go:178] kindnetd IP family: "ipv4"
	I1003 19:23:31.196462       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-03T19:23:31Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	E1003 19:23:31.421053       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1003 19:23:31.421459       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1003 19:23:31.421471       1 controller.go:381] "Waiting for informer caches to sync"
	I1003 19:23:31.421485       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1003 19:23:31.421767       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1003 19:23:31.421877       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1003 19:23:31.421949       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1003 19:23:31.422238       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1003 19:23:35.403281       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1003 19:23:35.403411       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1003 19:23:35.403482       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"networkpolicies\" in API group \"networking.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1003 19:23:35.403568       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1003 19:23:38.521585       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1003 19:23:38.521616       1 metrics.go:72] Registering metrics
	I1003 19:23:38.521685       1 controller.go:711] "Syncing nftables rules"
	I1003 19:23:41.397987       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1003 19:23:41.398115       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b4edb0bc8b2e10ddd91a1f18e41714e9b020effe870b870ad1548c51abdd698a] <==
	I1003 19:23:35.374888       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1003 19:23:35.374985       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1003 19:23:35.388975       1 aggregator.go:171] initial CRD sync complete...
	I1003 19:23:35.389052       1 autoregister_controller.go:144] Starting autoregister controller
	I1003 19:23:35.392238       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1003 19:23:35.421199       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1003 19:23:35.433114       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1003 19:23:35.463680       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1003 19:23:35.463713       1 policy_source.go:240] refreshing policies
	I1003 19:23:35.468075       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1003 19:23:35.468929       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1003 19:23:35.475792       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1003 19:23:35.476899       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1003 19:23:35.476999       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1003 19:23:35.480402       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1003 19:23:35.480432       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1003 19:23:35.480560       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1003 19:23:35.486890       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1003 19:23:35.494806       1 cache.go:39] Caches are synced for autoregister controller
	I1003 19:23:36.074730       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1003 19:23:37.350782       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1003 19:23:41.763009       1 controller.go:667] quota admission added evaluator for: endpoints
	I1003 19:23:41.767151       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1003 19:23:41.770006       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1003 19:23:41.801721       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-apiserver [fd3fe7965793a71c3c6f9b9521b6b0c283e6b5ed6f1f5aee7fbfb482b5af6f32] <==
	I1003 19:22:28.696500       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1003 19:22:28.699359       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1003 19:22:28.703239       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1003 19:22:28.720834       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1003 19:22:28.721064       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1003 19:22:29.381470       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1003 19:22:29.386464       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1003 19:22:29.386490       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1003 19:22:30.256284       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1003 19:22:30.329953       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1003 19:22:30.501254       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1003 19:22:30.511348       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1003 19:22:30.513799       1 controller.go:667] quota admission added evaluator for: endpoints
	I1003 19:22:30.520467       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1003 19:22:30.582240       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1003 19:22:31.711880       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1003 19:22:31.765748       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1003 19:22:31.804093       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1003 19:22:36.335656       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1003 19:22:36.374269       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1003 19:22:36.494018       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1003 19:22:36.778534       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1003 19:23:23.291597       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	W1003 19:23:23.330698       1 logging.go:55] [core] [Channel #107 SubChannel #109]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1003 19:23:23.330865       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [5bc9d928c66f715d2cb955773ff9a4ceeac2d33a54d32a1544eac9d3e61700fe] <==
	I1003 19:23:38.702899       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1003 19:23:38.706630       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1003 19:23:38.708349       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1003 19:23:38.711597       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1003 19:23:38.712777       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1003 19:23:38.713942       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1003 19:23:38.715182       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1003 19:23:38.716425       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1003 19:23:38.718183       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1003 19:23:38.718502       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1003 19:23:38.720905       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1003 19:23:38.720926       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1003 19:23:38.724107       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1003 19:23:38.724202       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1003 19:23:38.736710       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1003 19:23:38.737143       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1003 19:23:38.737213       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1003 19:23:38.739760       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1003 19:23:38.742000       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1003 19:23:38.742070       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1003 19:23:38.742012       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1003 19:23:38.742035       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1003 19:23:38.742059       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1003 19:23:38.742047       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1003 19:23:38.743323       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	
	
	==> kube-controller-manager [fe077fc7b7398ab6a71e31a253a8c67d7227163b1d3d6d2ff769425cebd43420] <==
	I1003 19:22:35.473758       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1003 19:22:35.474831       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1003 19:22:35.474848       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1003 19:22:35.474860       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1003 19:22:35.474869       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1003 19:22:35.479465       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1003 19:22:35.474903       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1003 19:22:35.474894       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1003 19:22:35.483276       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1003 19:22:35.486013       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1003 19:22:35.488889       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1003 19:22:35.490117       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1003 19:22:35.490177       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1003 19:22:35.490227       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1003 19:22:35.490270       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1003 19:22:35.490316       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1003 19:22:35.498151       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1003 19:22:35.498576       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1003 19:22:35.518183       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1003 19:22:35.537140       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1003 19:22:35.573484       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1003 19:22:35.573506       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1003 19:22:35.573514       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1003 19:22:35.638041       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1003 19:23:20.479560       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [6168e29def1182e29c0bf294c1c3d7237309f9f85b32e17a56b611beab0de0f3] <==
	I1003 19:23:33.660179       1 server_linux.go:53] "Using iptables proxy"
	I1003 19:23:34.489557       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1003 19:23:35.414316       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes \"pause-844729\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1003 19:23:36.400598       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1003 19:23:36.402741       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1003 19:23:36.402951       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1003 19:23:36.503938       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1003 19:23:36.504052       1 server_linux.go:132] "Using iptables Proxier"
	I1003 19:23:36.513154       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1003 19:23:36.514787       1 server.go:527] "Version info" version="v1.34.1"
	I1003 19:23:36.515064       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1003 19:23:36.520215       1 config.go:200] "Starting service config controller"
	I1003 19:23:36.530296       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1003 19:23:36.520520       1 config.go:106] "Starting endpoint slice config controller"
	I1003 19:23:36.533032       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1003 19:23:36.533129       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1003 19:23:36.522491       1 config.go:309] "Starting node config controller"
	I1003 19:23:36.533241       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1003 19:23:36.533270       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1003 19:23:36.520535       1 config.go:403] "Starting serviceCIDR config controller"
	I1003 19:23:36.533337       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1003 19:23:36.533365       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1003 19:23:36.631165       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [857ea2e27fd5446162221b5717f5c41724882e4d6d67b73122cbadfde6751525] <==
	I1003 19:22:38.077697       1 server_linux.go:53] "Using iptables proxy"
	I1003 19:22:38.186479       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1003 19:22:38.288540       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1003 19:22:38.288578       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1003 19:22:38.288666       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1003 19:22:38.311328       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1003 19:22:38.311377       1 server_linux.go:132] "Using iptables Proxier"
	I1003 19:22:38.315794       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1003 19:22:38.316085       1 server.go:527] "Version info" version="v1.34.1"
	I1003 19:22:38.316105       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1003 19:22:38.317820       1 config.go:200] "Starting service config controller"
	I1003 19:22:38.317889       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1003 19:22:38.317951       1 config.go:106] "Starting endpoint slice config controller"
	I1003 19:22:38.317978       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1003 19:22:38.318015       1 config.go:403] "Starting serviceCIDR config controller"
	I1003 19:22:38.318045       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1003 19:22:38.318949       1 config.go:309] "Starting node config controller"
	I1003 19:22:38.321017       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1003 19:22:38.321088       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1003 19:22:38.419037       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1003 19:22:38.419049       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1003 19:22:38.419086       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [6f18ec5c83f04389f6cce9ba80e373f135129e84c9590239ca46414eb849a154] <==
	E1003 19:22:28.651336       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1003 19:22:28.651412       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1003 19:22:28.651486       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1003 19:22:28.651533       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1003 19:22:28.651648       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1003 19:22:28.656084       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1003 19:22:28.656966       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1003 19:22:29.466339       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1003 19:22:29.557013       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1003 19:22:29.647468       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1003 19:22:29.657717       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1003 19:22:29.672597       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1003 19:22:29.703029       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1003 19:22:29.714234       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1003 19:22:29.794980       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1003 19:22:29.807989       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1003 19:22:29.810050       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1003 19:22:29.832475       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	I1003 19:22:32.516621       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1003 19:23:23.308485       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1003 19:23:23.308587       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1003 19:23:23.309647       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1003 19:23:23.310772       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1003 19:23:23.311379       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1003 19:23:23.311456       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [da0cffc30d07d485c02b0ec61d8a9b3909ac227213b2060ee5749f2e4c309f14] <==
	I1003 19:23:35.373580       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1003 19:23:35.376172       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1003 19:23:35.376680       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1003 19:23:35.376775       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1003 19:23:35.376824       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1003 19:23:35.383702       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1003 19:23:35.383867       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1003 19:23:35.389208       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1003 19:23:35.389319       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1003 19:23:35.389412       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1003 19:23:35.389824       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1003 19:23:35.389950       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1003 19:23:35.390029       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1003 19:23:35.390109       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1003 19:23:35.390189       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1003 19:23:35.390335       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1003 19:23:35.390471       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1003 19:23:35.398879       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1003 19:23:35.399037       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1003 19:23:35.405004       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1003 19:23:35.405232       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1003 19:23:35.405382       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1003 19:23:35.405520       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1003 19:23:35.405694       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	I1003 19:23:36.977899       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 03 19:23:31 pause-844729 kubelet[1312]: E1003 19:23:31.037386    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kindnet-qhksz\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="0596aa14-3857-4ba6-a81c-11b8c29baf94" pod="kube-system/kindnet-qhksz"
	Oct 03 19:23:31 pause-844729 kubelet[1312]: E1003 19:23:31.037667    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-z7pwb\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="427f1d63-2b09-401a-b2f3-2e2a8248c11e" pod="kube-system/coredns-66bc5c9577-z7pwb"
	Oct 03 19:23:31 pause-844729 kubelet[1312]: I1003 19:23:31.042418    1312 scope.go:117] "RemoveContainer" containerID="6f18ec5c83f04389f6cce9ba80e373f135129e84c9590239ca46414eb849a154"
	Oct 03 19:23:31 pause-844729 kubelet[1312]: E1003 19:23:31.043727    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-844729\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="c713f73feb229ea0aeb655c8766a710f" pod="kube-system/etcd-pause-844729"
	Oct 03 19:23:31 pause-844729 kubelet[1312]: E1003 19:23:31.044212    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-844729\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="f67d1dd0c885c384aa5abf187316f922" pod="kube-system/kube-scheduler-pause-844729"
	Oct 03 19:23:31 pause-844729 kubelet[1312]: E1003 19:23:31.044555    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-844729\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="70ebf733007858dadb749b90ec6fad45" pod="kube-system/kube-apiserver-pause-844729"
	Oct 03 19:23:31 pause-844729 kubelet[1312]: E1003 19:23:31.047766    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-844729\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="b7eccba1dcf39d64a52249a54fe30caa" pod="kube-system/kube-controller-manager-pause-844729"
	Oct 03 19:23:31 pause-844729 kubelet[1312]: E1003 19:23:31.048369    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vxnlc\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="b9fa1a51-79ed-470a-a56a-1d830b23760e" pod="kube-system/kube-proxy-vxnlc"
	Oct 03 19:23:31 pause-844729 kubelet[1312]: E1003 19:23:31.048707    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kindnet-qhksz\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="0596aa14-3857-4ba6-a81c-11b8c29baf94" pod="kube-system/kindnet-qhksz"
	Oct 03 19:23:31 pause-844729 kubelet[1312]: E1003 19:23:31.049055    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-z7pwb\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="427f1d63-2b09-401a-b2f3-2e2a8248c11e" pod="kube-system/coredns-66bc5c9577-z7pwb"
	Oct 03 19:23:35 pause-844729 kubelet[1312]: E1003 19:23:35.289895    1312 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-844729\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-844729' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Oct 03 19:23:35 pause-844729 kubelet[1312]: E1003 19:23:35.290051    1312 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-qhksz\" is forbidden: User \"system:node:pause-844729\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-844729' and this object" podUID="0596aa14-3857-4ba6-a81c-11b8c29baf94" pod="kube-system/kindnet-qhksz"
	Oct 03 19:23:35 pause-844729 kubelet[1312]: E1003 19:23:35.299999    1312 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-z7pwb\" is forbidden: User \"system:node:pause-844729\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-844729' and this object" podUID="427f1d63-2b09-401a-b2f3-2e2a8248c11e" pod="kube-system/coredns-66bc5c9577-z7pwb"
	Oct 03 19:23:35 pause-844729 kubelet[1312]: E1003 19:23:35.325838    1312 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-844729\" is forbidden: User \"system:node:pause-844729\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-844729' and this object" podUID="c713f73feb229ea0aeb655c8766a710f" pod="kube-system/etcd-pause-844729"
	Oct 03 19:23:35 pause-844729 kubelet[1312]: E1003 19:23:35.344560    1312 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-844729\" is forbidden: User \"system:node:pause-844729\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-844729' and this object" podUID="f67d1dd0c885c384aa5abf187316f922" pod="kube-system/kube-scheduler-pause-844729"
	Oct 03 19:23:35 pause-844729 kubelet[1312]: E1003 19:23:35.350668    1312 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-844729\" is forbidden: User \"system:node:pause-844729\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-844729' and this object" podUID="70ebf733007858dadb749b90ec6fad45" pod="kube-system/kube-apiserver-pause-844729"
	Oct 03 19:23:35 pause-844729 kubelet[1312]: E1003 19:23:35.357089    1312 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-844729\" is forbidden: User \"system:node:pause-844729\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-844729' and this object" podUID="b7eccba1dcf39d64a52249a54fe30caa" pod="kube-system/kube-controller-manager-pause-844729"
	Oct 03 19:23:35 pause-844729 kubelet[1312]: E1003 19:23:35.359546    1312 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-vxnlc\" is forbidden: User \"system:node:pause-844729\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-844729' and this object" podUID="b9fa1a51-79ed-470a-a56a-1d830b23760e" pod="kube-system/kube-proxy-vxnlc"
	Oct 03 19:23:35 pause-844729 kubelet[1312]: E1003 19:23:35.361419    1312 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-qhksz\" is forbidden: User \"system:node:pause-844729\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-844729' and this object" podUID="0596aa14-3857-4ba6-a81c-11b8c29baf94" pod="kube-system/kindnet-qhksz"
	Oct 03 19:23:35 pause-844729 kubelet[1312]: E1003 19:23:35.366680    1312 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-z7pwb\" is forbidden: User \"system:node:pause-844729\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-844729' and this object" podUID="427f1d63-2b09-401a-b2f3-2e2a8248c11e" pod="kube-system/coredns-66bc5c9577-z7pwb"
	Oct 03 19:23:35 pause-844729 kubelet[1312]: E1003 19:23:35.372656    1312 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-844729\" is forbidden: User \"system:node:pause-844729\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-844729' and this object" podUID="c713f73feb229ea0aeb655c8766a710f" pod="kube-system/etcd-pause-844729"
	Oct 03 19:23:35 pause-844729 kubelet[1312]: E1003 19:23:35.382981    1312 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-844729\" is forbidden: User \"system:node:pause-844729\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-844729' and this object" podUID="f67d1dd0c885c384aa5abf187316f922" pod="kube-system/kube-scheduler-pause-844729"
	Oct 03 19:23:47 pause-844729 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 03 19:23:47 pause-844729 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 03 19:23:47 pause-844729 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-844729 -n pause-844729
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-844729 -n pause-844729: exit status 2 (335.168678ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-844729 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-844729
helpers_test.go:243: (dbg) docker inspect pause-844729:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "cf7ab3517f2ae7a4937862ee8f7ee047bfc4b9bfc4b810b5ba6c94cbfa68c39b",
	        "Created": "2025-10-03T19:22:03.195479704Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 440675,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-03T19:22:03.257980913Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/cf7ab3517f2ae7a4937862ee8f7ee047bfc4b9bfc4b810b5ba6c94cbfa68c39b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cf7ab3517f2ae7a4937862ee8f7ee047bfc4b9bfc4b810b5ba6c94cbfa68c39b/hostname",
	        "HostsPath": "/var/lib/docker/containers/cf7ab3517f2ae7a4937862ee8f7ee047bfc4b9bfc4b810b5ba6c94cbfa68c39b/hosts",
	        "LogPath": "/var/lib/docker/containers/cf7ab3517f2ae7a4937862ee8f7ee047bfc4b9bfc4b810b5ba6c94cbfa68c39b/cf7ab3517f2ae7a4937862ee8f7ee047bfc4b9bfc4b810b5ba6c94cbfa68c39b-json.log",
	        "Name": "/pause-844729",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-844729:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-844729",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "cf7ab3517f2ae7a4937862ee8f7ee047bfc4b9bfc4b810b5ba6c94cbfa68c39b",
	                "LowerDir": "/var/lib/docker/overlay2/871bedc0b2467036df02d8ce1022320cdf3e756fab32cce3ba1f1d98f9e27236-init/diff:/var/lib/docker/overlay2/87b205803817b0b71a214d995ab7e10a92033bbf72d76d6e052f1d21ccecb313/diff",
	                "MergedDir": "/var/lib/docker/overlay2/871bedc0b2467036df02d8ce1022320cdf3e756fab32cce3ba1f1d98f9e27236/merged",
	                "UpperDir": "/var/lib/docker/overlay2/871bedc0b2467036df02d8ce1022320cdf3e756fab32cce3ba1f1d98f9e27236/diff",
	                "WorkDir": "/var/lib/docker/overlay2/871bedc0b2467036df02d8ce1022320cdf3e756fab32cce3ba1f1d98f9e27236/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-844729",
	                "Source": "/var/lib/docker/volumes/pause-844729/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-844729",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-844729",
	                "name.minikube.sigs.k8s.io": "pause-844729",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b44380d16e7c66063187e333169732332b7b40d6df7765ffcbe77905fb69a74e",
	            "SandboxKey": "/var/run/docker/netns/b44380d16e7c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33393"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33394"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33397"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33395"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33396"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-844729": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9e:d2:ab:af:fa:f4",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "153e463ae4daea096ce512cd0e3e6b4feb726d8b0603650996676d765451008a",
	                    "EndpointID": "b38d57ee71c71fb502cdc51842b3532b71f3ecea7ac38ef020b07be637cff560",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-844729",
	                        "cf7ab3517f2a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-844729 -n pause-844729
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-844729 -n pause-844729: exit status 2 (328.371762ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-844729 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-844729 logs -n 25: (1.370155127s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-929800 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                    │ NoKubernetes-929800       │ jenkins │ v1.37.0 │ 03 Oct 25 19:17 UTC │ 03 Oct 25 19:18 UTC │
	│ start   │ -p missing-upgrade-546147 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-546147    │ jenkins │ v1.32.0 │ 03 Oct 25 19:17 UTC │ 03 Oct 25 19:18 UTC │
	│ start   │ -p NoKubernetes-929800 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-929800       │ jenkins │ v1.37.0 │ 03 Oct 25 19:18 UTC │ 03 Oct 25 19:19 UTC │
	│ start   │ -p missing-upgrade-546147 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-546147    │ jenkins │ v1.37.0 │ 03 Oct 25 19:19 UTC │ 03 Oct 25 19:19 UTC │
	│ delete  │ -p NoKubernetes-929800                                                                                                                   │ NoKubernetes-929800       │ jenkins │ v1.37.0 │ 03 Oct 25 19:19 UTC │ 03 Oct 25 19:19 UTC │
	│ start   │ -p NoKubernetes-929800 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-929800       │ jenkins │ v1.37.0 │ 03 Oct 25 19:19 UTC │ 03 Oct 25 19:19 UTC │
	│ ssh     │ -p NoKubernetes-929800 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-929800       │ jenkins │ v1.37.0 │ 03 Oct 25 19:19 UTC │                     │
	│ stop    │ -p NoKubernetes-929800                                                                                                                   │ NoKubernetes-929800       │ jenkins │ v1.37.0 │ 03 Oct 25 19:19 UTC │ 03 Oct 25 19:19 UTC │
	│ start   │ -p NoKubernetes-929800 --driver=docker  --container-runtime=crio                                                                         │ NoKubernetes-929800       │ jenkins │ v1.37.0 │ 03 Oct 25 19:19 UTC │ 03 Oct 25 19:19 UTC │
	│ ssh     │ -p NoKubernetes-929800 sudo systemctl is-active --quiet service kubelet                                                                  │ NoKubernetes-929800       │ jenkins │ v1.37.0 │ 03 Oct 25 19:19 UTC │                     │
	│ delete  │ -p NoKubernetes-929800                                                                                                                   │ NoKubernetes-929800       │ jenkins │ v1.37.0 │ 03 Oct 25 19:19 UTC │ 03 Oct 25 19:19 UTC │
	│ start   │ -p kubernetes-upgrade-629875 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-629875 │ jenkins │ v1.37.0 │ 03 Oct 25 19:19 UTC │ 03 Oct 25 19:20 UTC │
	│ delete  │ -p missing-upgrade-546147                                                                                                                │ missing-upgrade-546147    │ jenkins │ v1.37.0 │ 03 Oct 25 19:19 UTC │ 03 Oct 25 19:19 UTC │
	│ start   │ -p stopped-upgrade-414530 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-414530    │ jenkins │ v1.32.0 │ 03 Oct 25 19:19 UTC │ 03 Oct 25 19:20 UTC │
	│ stop    │ -p kubernetes-upgrade-629875                                                                                                             │ kubernetes-upgrade-629875 │ jenkins │ v1.37.0 │ 03 Oct 25 19:20 UTC │ 03 Oct 25 19:20 UTC │
	│ start   │ -p kubernetes-upgrade-629875 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-629875 │ jenkins │ v1.37.0 │ 03 Oct 25 19:20 UTC │                     │
	│ stop    │ stopped-upgrade-414530 stop                                                                                                              │ stopped-upgrade-414530    │ jenkins │ v1.32.0 │ 03 Oct 25 19:20 UTC │ 03 Oct 25 19:20 UTC │
	│ start   │ -p stopped-upgrade-414530 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-414530    │ jenkins │ v1.37.0 │ 03 Oct 25 19:20 UTC │ 03 Oct 25 19:20 UTC │
	│ delete  │ -p stopped-upgrade-414530                                                                                                                │ stopped-upgrade-414530    │ jenkins │ v1.37.0 │ 03 Oct 25 19:20 UTC │ 03 Oct 25 19:21 UTC │
	│ start   │ -p running-upgrade-024862 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-024862    │ jenkins │ v1.32.0 │ 03 Oct 25 19:21 UTC │ 03 Oct 25 19:21 UTC │
	│ start   │ -p running-upgrade-024862 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-024862    │ jenkins │ v1.37.0 │ 03 Oct 25 19:21 UTC │ 03 Oct 25 19:21 UTC │
	│ delete  │ -p running-upgrade-024862                                                                                                                │ running-upgrade-024862    │ jenkins │ v1.37.0 │ 03 Oct 25 19:21 UTC │ 03 Oct 25 19:21 UTC │
	│ start   │ -p pause-844729 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-844729              │ jenkins │ v1.37.0 │ 03 Oct 25 19:21 UTC │ 03 Oct 25 19:23 UTC │
	│ start   │ -p pause-844729 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-844729              │ jenkins │ v1.37.0 │ 03 Oct 25 19:23 UTC │ 03 Oct 25 19:23 UTC │
	│ pause   │ -p pause-844729 --alsologtostderr -v=5                                                                                                   │ pause-844729              │ jenkins │ v1.37.0 │ 03 Oct 25 19:23 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/03 19:23:22
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 19:23:22.033066  444809 out.go:360] Setting OutFile to fd 1 ...
	I1003 19:23:22.033250  444809 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 19:23:22.033261  444809 out.go:374] Setting ErrFile to fd 2...
	I1003 19:23:22.033267  444809 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 19:23:22.033536  444809 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-284583/.minikube/bin
	I1003 19:23:22.033936  444809 out.go:368] Setting JSON to false
	I1003 19:23:22.034984  444809 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7553,"bootTime":1759511849,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1003 19:23:22.035065  444809 start.go:140] virtualization:  
	I1003 19:23:22.040219  444809 out.go:179] * [pause-844729] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1003 19:23:22.043386  444809 out.go:179]   - MINIKUBE_LOCATION=21625
	I1003 19:23:22.043432  444809 notify.go:220] Checking for updates...
	I1003 19:23:22.046466  444809 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 19:23:22.049438  444809 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21625-284583/kubeconfig
	I1003 19:23:22.052277  444809 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-284583/.minikube
	I1003 19:23:22.055714  444809 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1003 19:23:22.058728  444809 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 19:23:22.062457  444809 config.go:182] Loaded profile config "pause-844729": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 19:23:22.063052  444809 driver.go:421] Setting default libvirt URI to qemu:///system
	I1003 19:23:22.088856  444809 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1003 19:23:22.088973  444809 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 19:23:22.161081  444809 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-03 19:23:22.150870956 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1003 19:23:22.161194  444809 docker.go:318] overlay module found
	I1003 19:23:22.164385  444809 out.go:179] * Using the docker driver based on existing profile
	I1003 19:23:22.167204  444809 start.go:304] selected driver: docker
	I1003 19:23:22.167226  444809 start.go:924] validating driver "docker" against &{Name:pause-844729 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-844729 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 19:23:22.167368  444809 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 19:23:22.167488  444809 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 19:23:22.225202  444809 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-03 19:23:22.216482512 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1003 19:23:22.225617  444809 cni.go:84] Creating CNI manager for ""
	I1003 19:23:22.225682  444809 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 19:23:22.225733  444809 start.go:348] cluster config:
	{Name:pause-844729 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-844729 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 19:23:22.228804  444809 out.go:179] * Starting "pause-844729" primary control-plane node in "pause-844729" cluster
	I1003 19:23:22.231563  444809 cache.go:123] Beginning downloading kic base image for docker with crio
	I1003 19:23:22.234455  444809 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1003 19:23:22.237418  444809 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 19:23:22.237500  444809 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1003 19:23:22.237511  444809 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21625-284583/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1003 19:23:22.237673  444809 cache.go:58] Caching tarball of preloaded images
	I1003 19:23:22.237776  444809 preload.go:233] Found /home/jenkins/minikube-integration/21625-284583/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1003 19:23:22.237786  444809 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1003 19:23:22.237940  444809 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/pause-844729/config.json ...
	I1003 19:23:22.258016  444809 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1003 19:23:22.258039  444809 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1003 19:23:22.258052  444809 cache.go:232] Successfully downloaded all kic artifacts
	I1003 19:23:22.258077  444809 start.go:360] acquireMachinesLock for pause-844729: {Name:mk018320e2700ef01919004e8c23ac2ff4cc641e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 19:23:22.258140  444809 start.go:364] duration metric: took 37.367µs to acquireMachinesLock for "pause-844729"
	I1003 19:23:22.258163  444809 start.go:96] Skipping create...Using existing machine configuration
	I1003 19:23:22.258173  444809 fix.go:54] fixHost starting: 
	I1003 19:23:22.258441  444809 cli_runner.go:164] Run: docker container inspect pause-844729 --format={{.State.Status}}
	I1003 19:23:22.281538  444809 fix.go:112] recreateIfNeeded on pause-844729: state=Running err=<nil>
	W1003 19:23:22.281572  444809 fix.go:138] unexpected machine state, will restart: <nil>
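Because the profile container is already running, minikube skips recreation and reuses the existing machine. The same state check can be reproduced by hand (a sketch, assuming the docker CLI is available and the profile container is named pause-844729):

    docker container inspect pause-844729 --format '{{.State.Status}}'   # expect "running"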
	I1003 19:23:22.713958  432533 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1003 19:23:22.714335  432533 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1003 19:23:22.714375  432533 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 19:23:22.714435  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 19:23:22.740718  432533 cri.go:89] found id: "04bedbd6c6d1a6948852e8a02d927b6d181e1cdd8b926cadb512e6c5a9e2bc18"
	I1003 19:23:22.740764  432533 cri.go:89] found id: ""
	I1003 19:23:22.740773  432533 logs.go:282] 1 containers: [04bedbd6c6d1a6948852e8a02d927b6d181e1cdd8b926cadb512e6c5a9e2bc18]
	I1003 19:23:22.740830  432533 ssh_runner.go:195] Run: which crictl
	I1003 19:23:22.744518  432533 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 19:23:22.744590  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 19:23:22.788943  432533 cri.go:89] found id: ""
	I1003 19:23:22.788967  432533 logs.go:282] 0 containers: []
	W1003 19:23:22.788975  432533 logs.go:284] No container was found matching "etcd"
	I1003 19:23:22.788982  432533 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 19:23:22.789041  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 19:23:22.828675  432533 cri.go:89] found id: ""
	I1003 19:23:22.828704  432533 logs.go:282] 0 containers: []
	W1003 19:23:22.828713  432533 logs.go:284] No container was found matching "coredns"
	I1003 19:23:22.828719  432533 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 19:23:22.828798  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 19:23:22.860516  432533 cri.go:89] found id: "dc5ccb1606b8053522f591bfece54cc0b244422b0fc5af82da81d2215cabb3a1"
	I1003 19:23:22.860542  432533 cri.go:89] found id: ""
	I1003 19:23:22.860550  432533 logs.go:282] 1 containers: [dc5ccb1606b8053522f591bfece54cc0b244422b0fc5af82da81d2215cabb3a1]
	I1003 19:23:22.860603  432533 ssh_runner.go:195] Run: which crictl
	I1003 19:23:22.865945  432533 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 19:23:22.866012  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 19:23:22.903975  432533 cri.go:89] found id: ""
	I1003 19:23:22.903996  432533 logs.go:282] 0 containers: []
	W1003 19:23:22.904004  432533 logs.go:284] No container was found matching "kube-proxy"
	I1003 19:23:22.904011  432533 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 19:23:22.904067  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 19:23:22.936648  432533 cri.go:89] found id: "c62c95f3827d9847bcb196dcef4991555aba77e95ba4e6a5900a14faf7679b30"
	I1003 19:23:22.936668  432533 cri.go:89] found id: ""
	I1003 19:23:22.936676  432533 logs.go:282] 1 containers: [c62c95f3827d9847bcb196dcef4991555aba77e95ba4e6a5900a14faf7679b30]
	I1003 19:23:22.936747  432533 ssh_runner.go:195] Run: which crictl
	I1003 19:23:22.941753  432533 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 19:23:22.941822  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 19:23:22.973122  432533 cri.go:89] found id: ""
	I1003 19:23:22.973145  432533 logs.go:282] 0 containers: []
	W1003 19:23:22.973154  432533 logs.go:284] No container was found matching "kindnet"
	I1003 19:23:22.973161  432533 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1003 19:23:22.973216  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1003 19:23:23.004326  432533 cri.go:89] found id: ""
	I1003 19:23:23.004357  432533 logs.go:282] 0 containers: []
	W1003 19:23:23.004366  432533 logs.go:284] No container was found matching "storage-provisioner"
	I1003 19:23:23.004381  432533 logs.go:123] Gathering logs for CRI-O ...
	I1003 19:23:23.004392  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 19:23:23.098905  432533 logs.go:123] Gathering logs for container status ...
	I1003 19:23:23.098971  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 19:23:23.159167  432533 logs.go:123] Gathering logs for kubelet ...
	I1003 19:23:23.159194  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 19:23:23.309441  432533 logs.go:123] Gathering logs for dmesg ...
	I1003 19:23:23.309477  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 19:23:23.333940  432533 logs.go:123] Gathering logs for describe nodes ...
	I1003 19:23:23.334215  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 19:23:23.409300  432533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
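The "connection refused" against localhost:8443 just means no apiserver is accepting connections yet, consistent with the failed healthz probes above, so the describe-nodes step is expected to fail until the control plane comes back. A manual probe of the same endpoint would look like (a sketch, using the node IP from this run):

    curl -sk --max-time 2 https://192.168.85.2:8443/healthz || echo "apiserver not ready"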
	I1003 19:23:23.409325  432533 logs.go:123] Gathering logs for kube-apiserver [04bedbd6c6d1a6948852e8a02d927b6d181e1cdd8b926cadb512e6c5a9e2bc18] ...
	I1003 19:23:23.409338  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 04bedbd6c6d1a6948852e8a02d927b6d181e1cdd8b926cadb512e6c5a9e2bc18"
	I1003 19:23:23.446864  432533 logs.go:123] Gathering logs for kube-scheduler [dc5ccb1606b8053522f591bfece54cc0b244422b0fc5af82da81d2215cabb3a1] ...
	I1003 19:23:23.446935  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dc5ccb1606b8053522f591bfece54cc0b244422b0fc5af82da81d2215cabb3a1"
	I1003 19:23:23.512406  432533 logs.go:123] Gathering logs for kube-controller-manager [c62c95f3827d9847bcb196dcef4991555aba77e95ba4e6a5900a14faf7679b30] ...
	I1003 19:23:23.512443  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c62c95f3827d9847bcb196dcef4991555aba77e95ba4e6a5900a14faf7679b30"
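Each per-container capture above is crictl driven over SSH; done by hand on the node it would be along these lines (a sketch, IDs taken from this run):

    sudo crictl ps -a --quiet --name=kube-apiserver
    sudo crictl logs --tail 400 04bedbd6c6d1a6948852e8a02d927b6d181e1cdd8b926cadb512e6c5a9e2bc18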
	I1003 19:23:26.041309  432533 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1003 19:23:26.041808  432533 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1003 19:23:26.041868  432533 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 19:23:26.041933  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 19:23:26.074274  432533 cri.go:89] found id: "04bedbd6c6d1a6948852e8a02d927b6d181e1cdd8b926cadb512e6c5a9e2bc18"
	I1003 19:23:26.074298  432533 cri.go:89] found id: ""
	I1003 19:23:26.074308  432533 logs.go:282] 1 containers: [04bedbd6c6d1a6948852e8a02d927b6d181e1cdd8b926cadb512e6c5a9e2bc18]
	I1003 19:23:26.074378  432533 ssh_runner.go:195] Run: which crictl
	I1003 19:23:26.078232  432533 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 19:23:26.078305  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 19:23:26.112453  432533 cri.go:89] found id: ""
	I1003 19:23:26.112518  432533 logs.go:282] 0 containers: []
	W1003 19:23:26.112538  432533 logs.go:284] No container was found matching "etcd"
	I1003 19:23:26.112560  432533 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 19:23:26.112647  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 19:23:26.139341  432533 cri.go:89] found id: ""
	I1003 19:23:26.139363  432533 logs.go:282] 0 containers: []
	W1003 19:23:26.139371  432533 logs.go:284] No container was found matching "coredns"
	I1003 19:23:26.139378  432533 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 19:23:26.139439  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 19:23:26.166984  432533 cri.go:89] found id: "dc5ccb1606b8053522f591bfece54cc0b244422b0fc5af82da81d2215cabb3a1"
	I1003 19:23:26.167007  432533 cri.go:89] found id: ""
	I1003 19:23:26.167016  432533 logs.go:282] 1 containers: [dc5ccb1606b8053522f591bfece54cc0b244422b0fc5af82da81d2215cabb3a1]
	I1003 19:23:26.167102  432533 ssh_runner.go:195] Run: which crictl
	I1003 19:23:26.171212  432533 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 19:23:26.171309  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 19:23:26.197654  432533 cri.go:89] found id: ""
	I1003 19:23:26.197679  432533 logs.go:282] 0 containers: []
	W1003 19:23:26.197688  432533 logs.go:284] No container was found matching "kube-proxy"
	I1003 19:23:26.197695  432533 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 19:23:26.197751  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 19:23:26.223438  432533 cri.go:89] found id: "c62c95f3827d9847bcb196dcef4991555aba77e95ba4e6a5900a14faf7679b30"
	I1003 19:23:26.223461  432533 cri.go:89] found id: ""
	I1003 19:23:26.223470  432533 logs.go:282] 1 containers: [c62c95f3827d9847bcb196dcef4991555aba77e95ba4e6a5900a14faf7679b30]
	I1003 19:23:26.223526  432533 ssh_runner.go:195] Run: which crictl
	I1003 19:23:26.227564  432533 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 19:23:26.227633  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 19:23:26.253982  432533 cri.go:89] found id: ""
	I1003 19:23:26.254061  432533 logs.go:282] 0 containers: []
	W1003 19:23:26.254076  432533 logs.go:284] No container was found matching "kindnet"
	I1003 19:23:26.254084  432533 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1003 19:23:26.254148  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1003 19:23:26.279329  432533 cri.go:89] found id: ""
	I1003 19:23:26.279352  432533 logs.go:282] 0 containers: []
	W1003 19:23:26.279361  432533 logs.go:284] No container was found matching "storage-provisioner"
	I1003 19:23:26.279372  432533 logs.go:123] Gathering logs for describe nodes ...
	I1003 19:23:26.279383  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1003 19:23:22.284802  444809 out.go:252] * Updating the running docker "pause-844729" container ...
	I1003 19:23:22.284835  444809 machine.go:93] provisionDockerMachine start ...
	I1003 19:23:22.284913  444809 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-844729
	I1003 19:23:22.303011  444809 main.go:141] libmachine: Using SSH client type: native
	I1003 19:23:22.303336  444809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33393 <nil> <nil>}
	I1003 19:23:22.303351  444809 main.go:141] libmachine: About to run SSH command:
	hostname
	I1003 19:23:22.436262  444809 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-844729
	
	I1003 19:23:22.436294  444809 ubuntu.go:182] provisioning hostname "pause-844729"
	I1003 19:23:22.436355  444809 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-844729
	I1003 19:23:22.454966  444809 main.go:141] libmachine: Using SSH client type: native
	I1003 19:23:22.455280  444809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33393 <nil> <nil>}
	I1003 19:23:22.455294  444809 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-844729 && echo "pause-844729" | sudo tee /etc/hostname
	I1003 19:23:22.597723  444809 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-844729
	
	I1003 19:23:22.597874  444809 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-844729
	I1003 19:23:22.616212  444809 main.go:141] libmachine: Using SSH client type: native
	I1003 19:23:22.616547  444809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33393 <nil> <nil>}
	I1003 19:23:22.616563  444809 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-844729' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-844729/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-844729' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 19:23:22.757706  444809 main.go:141] libmachine: SSH cmd err, output: <nil>: 
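The script above keeps /etc/hosts in step with the new hostname: on Debian-based images the hostname is conventionally mapped to 127.0.1.1, so minikube either rewrites that entry or appends one. Verifying the result (a sketch):

    grep pause-844729 /etc/hosts   # expect something like "127.0.1.1 pause-844729"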
	I1003 19:23:22.757789  444809 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21625-284583/.minikube CaCertPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21625-284583/.minikube}
	I1003 19:23:22.757848  444809 ubuntu.go:190] setting up certificates
	I1003 19:23:22.757877  444809 provision.go:84] configureAuth start
	I1003 19:23:22.757966  444809 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-844729
	I1003 19:23:22.779781  444809 provision.go:143] copyHostCerts
	I1003 19:23:22.779847  444809 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-284583/.minikube/ca.pem, removing ...
	I1003 19:23:22.779864  444809 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-284583/.minikube/ca.pem
	I1003 19:23:22.779959  444809 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21625-284583/.minikube/ca.pem (1082 bytes)
	I1003 19:23:22.780067  444809 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-284583/.minikube/cert.pem, removing ...
	I1003 19:23:22.780074  444809 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-284583/.minikube/cert.pem
	I1003 19:23:22.780113  444809 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21625-284583/.minikube/cert.pem (1123 bytes)
	I1003 19:23:22.780175  444809 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-284583/.minikube/key.pem, removing ...
	I1003 19:23:22.780180  444809 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-284583/.minikube/key.pem
	I1003 19:23:22.780202  444809 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21625-284583/.minikube/key.pem (1675 bytes)
	I1003 19:23:22.780246  444809 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca-key.pem org=jenkins.pause-844729 san=[127.0.0.1 192.168.76.2 localhost minikube pause-844729]
	I1003 19:23:22.856374  444809 provision.go:177] copyRemoteCerts
	I1003 19:23:22.856447  444809 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 19:23:22.856492  444809 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-844729
	I1003 19:23:22.882359  444809 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33393 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/pause-844729/id_rsa Username:docker}
	I1003 19:23:22.988979  444809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 19:23:23.012714  444809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1003 19:23:23.038510  444809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1003 19:23:23.062449  444809 provision.go:87] duration metric: took 304.536804ms to configureAuth
	I1003 19:23:23.062516  444809 ubuntu.go:206] setting minikube options for container-runtime
	I1003 19:23:23.062752  444809 config.go:182] Loaded profile config "pause-844729": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 19:23:23.062884  444809 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-844729
	I1003 19:23:23.094442  444809 main.go:141] libmachine: Using SSH client type: native
	I1003 19:23:23.094740  444809 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33393 <nil> <nil>}
	I1003 19:23:23.094755  444809 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1003 19:23:28.448152  444809 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1003 19:23:28.448174  444809 machine.go:96] duration metric: took 6.163330744s to provisionDockerMachine
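Most of that six-second provisioning time is the CRI-O restart triggered by the sysconfig write above; the file it leaves behind should contain only the insecure-registry flag that was echoed back (a sketch):

    cat /etc/sysconfig/crio.minikube
    # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '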
	I1003 19:23:28.448184  444809 start.go:293] postStartSetup for "pause-844729" (driver="docker")
	I1003 19:23:28.448195  444809 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 19:23:28.448254  444809 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 19:23:28.448296  444809 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-844729
	I1003 19:23:28.466757  444809 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33393 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/pause-844729/id_rsa Username:docker}
	I1003 19:23:28.564834  444809 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 19:23:28.568490  444809 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1003 19:23:28.568517  444809 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1003 19:23:28.568528  444809 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-284583/.minikube/addons for local assets ...
	I1003 19:23:28.568606  444809 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-284583/.minikube/files for local assets ...
	I1003 19:23:28.568764  444809 filesync.go:149] local asset: /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem -> 2864342.pem in /etc/ssl/certs
	I1003 19:23:28.568885  444809 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 19:23:28.576606  444809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem --> /etc/ssl/certs/2864342.pem (1708 bytes)
	I1003 19:23:28.595345  444809 start.go:296] duration metric: took 147.14596ms for postStartSetup
	I1003 19:23:28.595449  444809 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 19:23:28.595496  444809 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-844729
	I1003 19:23:28.612849  444809 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33393 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/pause-844729/id_rsa Username:docker}
	I1003 19:23:28.706217  444809 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1003 19:23:28.711532  444809 fix.go:56] duration metric: took 6.453351887s for fixHost
	I1003 19:23:28.711557  444809 start.go:83] releasing machines lock for "pause-844729", held for 6.453404401s
	I1003 19:23:28.711627  444809 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-844729
	I1003 19:23:28.728457  444809 ssh_runner.go:195] Run: cat /version.json
	I1003 19:23:28.728516  444809 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-844729
	I1003 19:23:28.728568  444809 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 19:23:28.728618  444809 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-844729
	I1003 19:23:28.750271  444809 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33393 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/pause-844729/id_rsa Username:docker}
	I1003 19:23:28.752710  444809 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33393 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/pause-844729/id_rsa Username:docker}
	I1003 19:23:28.931589  444809 ssh_runner.go:195] Run: systemctl --version
	I1003 19:23:28.938369  444809 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1003 19:23:28.980055  444809 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1003 19:23:28.984623  444809 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 19:23:28.984776  444809 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 19:23:28.993632  444809 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
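Renaming any *bridge*/*podman* files to *.mk_disabled keeps a stray host CNI config from conflicting with the kindnet CNI that minikube recommends for the docker driver with crio (see the cni.go lines further down). Checking what is on the node (a sketch):

    ls /etc/cni/net.d   # disabled configs end in .mk_disabled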
	I1003 19:23:28.993660  444809 start.go:495] detecting cgroup driver to use...
	I1003 19:23:28.993706  444809 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1003 19:23:28.993757  444809 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 19:23:29.009689  444809 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 19:23:29.023549  444809 docker.go:218] disabling cri-docker service (if available) ...
	I1003 19:23:29.023658  444809 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1003 19:23:29.040348  444809 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1003 19:23:29.055435  444809 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1003 19:23:29.207437  444809 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1003 19:23:29.389654  444809 docker.go:234] disabling docker service ...
	I1003 19:23:29.389757  444809 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1003 19:23:29.407786  444809 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1003 19:23:29.422740  444809 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1003 19:23:29.593207  444809 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1003 19:23:29.780047  444809 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1003 19:23:29.796287  444809 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 19:23:29.818369  444809 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1003 19:23:29.818466  444809 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:23:29.827915  444809 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1003 19:23:29.828026  444809 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:23:29.838355  444809 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:23:29.847682  444809 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:23:29.857892  444809 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 19:23:29.866906  444809 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:23:29.876275  444809 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:23:29.885681  444809 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:23:29.895326  444809 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 19:23:29.903702  444809 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 19:23:29.912679  444809 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 19:23:30.117055  444809 ssh_runner.go:195] Run: sudo systemctl restart crio
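Taken together, the sed edits above should leave the CRI-O drop-in with the pause image, cgroup manager, conmon cgroup and the unprivileged-port sysctl set; the relevant lines of /etc/crio/crio.conf.d/02-crio.conf would then read roughly as follows (a sketch reconstructed from those edits, not a literal dump from this run):

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]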
	I1003 19:23:30.313038  444809 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1003 19:23:30.313146  444809 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1003 19:23:30.318062  444809 start.go:563] Will wait 60s for crictl version
	I1003 19:23:30.318184  444809 ssh_runner.go:195] Run: which crictl
	I1003 19:23:30.322157  444809 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1003 19:23:30.347079  444809 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1003 19:23:30.347186  444809 ssh_runner.go:195] Run: crio --version
	I1003 19:23:30.380606  444809 ssh_runner.go:195] Run: crio --version
	I1003 19:23:30.416056  444809 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1003 19:23:26.344626  432533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 19:23:26.344646  432533 logs.go:123] Gathering logs for kube-apiserver [04bedbd6c6d1a6948852e8a02d927b6d181e1cdd8b926cadb512e6c5a9e2bc18] ...
	I1003 19:23:26.344659  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 04bedbd6c6d1a6948852e8a02d927b6d181e1cdd8b926cadb512e6c5a9e2bc18"
	I1003 19:23:26.378186  432533 logs.go:123] Gathering logs for kube-scheduler [dc5ccb1606b8053522f591bfece54cc0b244422b0fc5af82da81d2215cabb3a1] ...
	I1003 19:23:26.378265  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dc5ccb1606b8053522f591bfece54cc0b244422b0fc5af82da81d2215cabb3a1"
	I1003 19:23:26.433575  432533 logs.go:123] Gathering logs for kube-controller-manager [c62c95f3827d9847bcb196dcef4991555aba77e95ba4e6a5900a14faf7679b30] ...
	I1003 19:23:26.433620  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c62c95f3827d9847bcb196dcef4991555aba77e95ba4e6a5900a14faf7679b30"
	I1003 19:23:26.459572  432533 logs.go:123] Gathering logs for CRI-O ...
	I1003 19:23:26.459600  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 19:23:26.519115  432533 logs.go:123] Gathering logs for container status ...
	I1003 19:23:26.519150  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 19:23:26.551327  432533 logs.go:123] Gathering logs for kubelet ...
	I1003 19:23:26.551356  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 19:23:26.665675  432533 logs.go:123] Gathering logs for dmesg ...
	I1003 19:23:26.665718  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 19:23:29.184584  432533 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1003 19:23:29.185015  432533 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1003 19:23:29.185068  432533 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 19:23:29.185121  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 19:23:29.217379  432533 cri.go:89] found id: "04bedbd6c6d1a6948852e8a02d927b6d181e1cdd8b926cadb512e6c5a9e2bc18"
	I1003 19:23:29.217398  432533 cri.go:89] found id: ""
	I1003 19:23:29.217406  432533 logs.go:282] 1 containers: [04bedbd6c6d1a6948852e8a02d927b6d181e1cdd8b926cadb512e6c5a9e2bc18]
	I1003 19:23:29.217462  432533 ssh_runner.go:195] Run: which crictl
	I1003 19:23:29.222192  432533 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 19:23:29.222271  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 19:23:29.253807  432533 cri.go:89] found id: ""
	I1003 19:23:29.253828  432533 logs.go:282] 0 containers: []
	W1003 19:23:29.253836  432533 logs.go:284] No container was found matching "etcd"
	I1003 19:23:29.253842  432533 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 19:23:29.253912  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 19:23:29.319033  432533 cri.go:89] found id: ""
	I1003 19:23:29.319061  432533 logs.go:282] 0 containers: []
	W1003 19:23:29.319070  432533 logs.go:284] No container was found matching "coredns"
	I1003 19:23:29.319076  432533 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 19:23:29.319130  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 19:23:29.352108  432533 cri.go:89] found id: "dc5ccb1606b8053522f591bfece54cc0b244422b0fc5af82da81d2215cabb3a1"
	I1003 19:23:29.352126  432533 cri.go:89] found id: ""
	I1003 19:23:29.352134  432533 logs.go:282] 1 containers: [dc5ccb1606b8053522f591bfece54cc0b244422b0fc5af82da81d2215cabb3a1]
	I1003 19:23:29.352206  432533 ssh_runner.go:195] Run: which crictl
	I1003 19:23:29.356568  432533 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 19:23:29.356642  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 19:23:29.397498  432533 cri.go:89] found id: ""
	I1003 19:23:29.397519  432533 logs.go:282] 0 containers: []
	W1003 19:23:29.397533  432533 logs.go:284] No container was found matching "kube-proxy"
	I1003 19:23:29.397540  432533 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 19:23:29.397597  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 19:23:29.434151  432533 cri.go:89] found id: "c62c95f3827d9847bcb196dcef4991555aba77e95ba4e6a5900a14faf7679b30"
	I1003 19:23:29.434225  432533 cri.go:89] found id: ""
	I1003 19:23:29.434249  432533 logs.go:282] 1 containers: [c62c95f3827d9847bcb196dcef4991555aba77e95ba4e6a5900a14faf7679b30]
	I1003 19:23:29.434319  432533 ssh_runner.go:195] Run: which crictl
	I1003 19:23:29.438661  432533 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 19:23:29.438739  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 19:23:29.485725  432533 cri.go:89] found id: ""
	I1003 19:23:29.485789  432533 logs.go:282] 0 containers: []
	W1003 19:23:29.485811  432533 logs.go:284] No container was found matching "kindnet"
	I1003 19:23:29.485844  432533 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1003 19:23:29.485924  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1003 19:23:29.522585  432533 cri.go:89] found id: ""
	I1003 19:23:29.522671  432533 logs.go:282] 0 containers: []
	W1003 19:23:29.522696  432533 logs.go:284] No container was found matching "storage-provisioner"
	I1003 19:23:29.522730  432533 logs.go:123] Gathering logs for kube-controller-manager [c62c95f3827d9847bcb196dcef4991555aba77e95ba4e6a5900a14faf7679b30] ...
	I1003 19:23:29.522760  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c62c95f3827d9847bcb196dcef4991555aba77e95ba4e6a5900a14faf7679b30"
	I1003 19:23:29.571044  432533 logs.go:123] Gathering logs for CRI-O ...
	I1003 19:23:29.571127  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 19:23:29.635213  432533 logs.go:123] Gathering logs for container status ...
	I1003 19:23:29.635291  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 19:23:29.676858  432533 logs.go:123] Gathering logs for kubelet ...
	I1003 19:23:29.676927  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 19:23:29.800583  432533 logs.go:123] Gathering logs for dmesg ...
	I1003 19:23:29.800648  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 19:23:29.817324  432533 logs.go:123] Gathering logs for describe nodes ...
	I1003 19:23:29.817474  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 19:23:29.912526  432533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 19:23:29.912605  432533 logs.go:123] Gathering logs for kube-apiserver [04bedbd6c6d1a6948852e8a02d927b6d181e1cdd8b926cadb512e6c5a9e2bc18] ...
	I1003 19:23:29.912810  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 04bedbd6c6d1a6948852e8a02d927b6d181e1cdd8b926cadb512e6c5a9e2bc18"
	I1003 19:23:29.958206  432533 logs.go:123] Gathering logs for kube-scheduler [dc5ccb1606b8053522f591bfece54cc0b244422b0fc5af82da81d2215cabb3a1] ...
	I1003 19:23:29.958239  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dc5ccb1606b8053522f591bfece54cc0b244422b0fc5af82da81d2215cabb3a1"
	I1003 19:23:30.418944  444809 cli_runner.go:164] Run: docker network inspect pause-844729 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 19:23:30.434771  444809 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1003 19:23:30.438704  444809 kubeadm.go:883] updating cluster {Name:pause-844729 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-844729 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1003 19:23:30.438833  444809 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 19:23:30.438884  444809 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 19:23:30.472115  444809 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 19:23:30.472141  444809 crio.go:433] Images already preloaded, skipping extraction
	I1003 19:23:30.472195  444809 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 19:23:30.497303  444809 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 19:23:30.497326  444809 cache_images.go:85] Images are preloaded, skipping loading
	I1003 19:23:30.497334  444809 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1003 19:23:30.497448  444809 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-844729 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-844729 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
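The [Unit]/[Service] fragment above is what gets written a few lines later as the 362-byte systemd drop-in at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, pairing the kubelet flags with the node name and IP from the cluster config. Once installed it can be inspected with (a sketch):

    systemctl cat kubelet   # shows kubelet.service together with the 10-kubeadm.conf drop-in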
	I1003 19:23:30.497529  444809 ssh_runner.go:195] Run: crio config
	I1003 19:23:30.568383  444809 cni.go:84] Creating CNI manager for ""
	I1003 19:23:30.568459  444809 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 19:23:30.568490  444809 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1003 19:23:30.568537  444809 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-844729 NodeName:pause-844729 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 19:23:30.568707  444809 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-844729"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1003 19:23:30.568842  444809 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1003 19:23:30.579556  444809 binaries.go:44] Found k8s binaries, skipping transfer
	I1003 19:23:30.579653  444809 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1003 19:23:30.587738  444809 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1003 19:23:30.600325  444809 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 19:23:30.613471  444809 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
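The generated kubeadm config is staged as kubeadm.yaml.new rather than applied directly. Comparing it against whatever was used previously would look like this (a sketch, assuming an earlier kubeadm.yaml exists at the same path):

    sudo diff /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new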
	I1003 19:23:30.626741  444809 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1003 19:23:30.630607  444809 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 19:23:30.765830  444809 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 19:23:30.779527  444809 certs.go:69] Setting up /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/pause-844729 for IP: 192.168.76.2
	I1003 19:23:30.779549  444809 certs.go:195] generating shared ca certs ...
	I1003 19:23:30.779565  444809 certs.go:227] acquiring lock for ca certs: {Name:mk5a10e6c921326e9c211447576eaeb893259ba7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:23:30.779750  444809 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21625-284583/.minikube/ca.key
	I1003 19:23:30.779811  444809 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.key
	I1003 19:23:30.779827  444809 certs.go:257] generating profile certs ...
	I1003 19:23:30.779950  444809 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/pause-844729/client.key
	I1003 19:23:30.780063  444809 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/pause-844729/apiserver.key.62249f20
	I1003 19:23:30.780141  444809 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/pause-844729/proxy-client.key
	I1003 19:23:30.780294  444809 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/286434.pem (1338 bytes)
	W1003 19:23:30.780350  444809 certs.go:480] ignoring /home/jenkins/minikube-integration/21625-284583/.minikube/certs/286434_empty.pem, impossibly tiny 0 bytes
	I1003 19:23:30.780366  444809 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 19:23:30.780395  444809 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem (1082 bytes)
	I1003 19:23:30.780452  444809 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem (1123 bytes)
	I1003 19:23:30.780485  444809 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/key.pem (1675 bytes)
	I1003 19:23:30.780564  444809 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem (1708 bytes)
	I1003 19:23:30.781265  444809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 19:23:30.800060  444809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1003 19:23:30.817096  444809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 19:23:30.834124  444809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1003 19:23:30.851779  444809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/pause-844729/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1003 19:23:30.869243  444809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/pause-844729/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1003 19:23:30.886500  444809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/pause-844729/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 19:23:30.903838  444809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/pause-844729/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1003 19:23:30.921060  444809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/certs/286434.pem --> /usr/share/ca-certificates/286434.pem (1338 bytes)
	I1003 19:23:30.938097  444809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem --> /usr/share/ca-certificates/2864342.pem (1708 bytes)
	I1003 19:23:30.962454  444809 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 19:23:31.000431  444809 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1003 19:23:31.026636  444809 ssh_runner.go:195] Run: openssl version
	I1003 19:23:31.035020  444809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/286434.pem && ln -fs /usr/share/ca-certificates/286434.pem /etc/ssl/certs/286434.pem"
	I1003 19:23:31.046323  444809 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/286434.pem
	I1003 19:23:31.051386  444809 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  3 18:34 /usr/share/ca-certificates/286434.pem
	I1003 19:23:31.051513  444809 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/286434.pem
	I1003 19:23:31.180772  444809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/286434.pem /etc/ssl/certs/51391683.0"
	I1003 19:23:31.201717  444809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2864342.pem && ln -fs /usr/share/ca-certificates/2864342.pem /etc/ssl/certs/2864342.pem"
	I1003 19:23:31.221236  444809 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2864342.pem
	I1003 19:23:31.236900  444809 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  3 18:34 /usr/share/ca-certificates/2864342.pem
	I1003 19:23:31.237003  444809 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2864342.pem
	I1003 19:23:31.331982  444809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2864342.pem /etc/ssl/certs/3ec20f2e.0"
	I1003 19:23:31.348417  444809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 19:23:31.360913  444809 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 19:23:31.370626  444809 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  3 18:27 /usr/share/ca-certificates/minikubeCA.pem
	I1003 19:23:31.370759  444809 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 19:23:31.439176  444809 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
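The openssl x509 -hash calls compute the subject-name hash that OpenSSL-style trust stores use for symlink names under /etc/ssl/certs (b5213941.0 pairs with the minikube CA here). Recreating one link manually (a sketch):

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"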
	I1003 19:23:31.451766  444809 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 19:23:31.458026  444809 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1003 19:23:31.525372  444809 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1003 19:23:31.590886  444809 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1003 19:23:31.653634  444809 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1003 19:23:31.718387  444809 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1003 19:23:31.766607  444809 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
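Each -checkend 86400 probe asks whether the certificate expires within the next 86400 seconds (24 hours); openssl exits non-zero if it does, which is what would force regeneration instead of the cert reuse seen here. Standalone (a sketch):

    openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 && echo "valid for at least another 24h"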
	I1003 19:23:31.814986  444809 kubeadm.go:400] StartCluster: {Name:pause-844729 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-844729 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 19:23:31.815138  444809 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1003 19:23:31.815227  444809 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1003 19:23:31.874623  444809 cri.go:89] found id: "b4edb0bc8b2e10ddd91a1f18e41714e9b020effe870b870ad1548c51abdd698a"
	I1003 19:23:31.874649  444809 cri.go:89] found id: "b7cace5722ba0dea6c3f841afbdc009c616b089fc76a656644b067a4f8e082ea"
	I1003 19:23:31.874655  444809 cri.go:89] found id: "da0cffc30d07d485c02b0ec61d8a9b3909ac227213b2060ee5749f2e4c309f14"
	I1003 19:23:31.874662  444809 cri.go:89] found id: "45e08f5b3c8750ac2fd35558a348abcfe4889f155ac6450a819fd64a7c7330b8"
	I1003 19:23:31.874666  444809 cri.go:89] found id: "6168e29def1182e29c0bf294c1c3d7237309f9f85b32e17a56b611beab0de0f3"
	I1003 19:23:31.874695  444809 cri.go:89] found id: "e76d5b298ebfdc13c2635e65d607a1504f98294c7e20d1bb64f2ce5a749224ef"
	I1003 19:23:31.874705  444809 cri.go:89] found id: "5bc9d928c66f715d2cb955773ff9a4ceeac2d33a54d32a1544eac9d3e61700fe"
	I1003 19:23:31.874709  444809 cri.go:89] found id: "84fa045c869f127f450bb8752bea5a8159645bcb9dc95bf2aa9c7f45b5311ca2"
	I1003 19:23:31.874712  444809 cri.go:89] found id: "5d124f9877dc3034ad8f48f78e4d24801d20c0a339bfef51da35d2994dbc8ecd"
	I1003 19:23:31.874720  444809 cri.go:89] found id: "857ea2e27fd5446162221b5717f5c41724882e4d6d67b73122cbadfde6751525"
	I1003 19:23:31.874724  444809 cri.go:89] found id: "0e24f3ce9f6cbd2fee0b930845a84383d871589f9e0d5410c93ebc0a1007c92f"
	I1003 19:23:31.874728  444809 cri.go:89] found id: "fd3fe7965793a71c3c6f9b9521b6b0c283e6b5ed6f1f5aee7fbfb482b5af6f32"
	I1003 19:23:31.874733  444809 cri.go:89] found id: "6f18ec5c83f04389f6cce9ba80e373f135129e84c9590239ca46414eb849a154"
	I1003 19:23:31.874743  444809 cri.go:89] found id: "fe077fc7b7398ab6a71e31a253a8c67d7227163b1d3d6d2ff769425cebd43420"
	I1003 19:23:31.874746  444809 cri.go:89] found id: ""
	I1003 19:23:31.874809  444809 ssh_runner.go:195] Run: sudo runc list -f json
	W1003 19:23:31.907330  444809 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-03T19:23:31Z" level=error msg="open /run/runc: no such file or directory"
	I1003 19:23:31.907481  444809 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 19:23:31.923289  444809 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1003 19:23:31.923313  444809 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1003 19:23:31.923404  444809 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1003 19:23:31.942557  444809 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1003 19:23:31.943259  444809 kubeconfig.go:125] found "pause-844729" server: "https://192.168.76.2:8443"
	I1003 19:23:31.944073  444809 kapi.go:59] client config for pause-844729: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21625-284583/.minikube/profiles/pause-844729/client.crt", KeyFile:"/home/jenkins/minikube-integration/21625-284583/.minikube/profiles/pause-844729/client.key", CAFile:"/home/jenkins/minikube-integration/21625-284583/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]s
tring(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1003 19:23:31.944769  444809 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1003 19:23:31.944816  444809 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1003 19:23:31.944837  444809 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1003 19:23:31.944857  444809 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1003 19:23:31.944877  444809 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1003 19:23:31.945193  444809 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1003 19:23:31.969028  444809 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1003 19:23:31.969110  444809 kubeadm.go:601] duration metric: took 45.790048ms to restartPrimaryControlPlane
	I1003 19:23:31.969136  444809 kubeadm.go:402] duration metric: took 154.174035ms to StartCluster
	I1003 19:23:31.969169  444809 settings.go:142] acquiring lock: {Name:mkc95577dbc448e3409dfa2b5e53a3a1327cb451 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:23:31.969250  444809 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21625-284583/kubeconfig
	I1003 19:23:31.970167  444809 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/kubeconfig: {Name:mkc1323fd87f4a78231a26d2dab0dff7feecf1e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:23:31.970420  444809 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1003 19:23:31.970830  444809 config.go:182] Loaded profile config "pause-844729": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 19:23:31.970817  444809 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1003 19:23:31.977596  444809 out.go:179] * Enabled addons: 
	I1003 19:23:31.977681  444809 out.go:179] * Verifying Kubernetes components...
	I1003 19:23:31.980667  444809 addons.go:514] duration metric: took 9.832364ms for enable addons: enabled=[]
	I1003 19:23:31.980821  444809 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 19:23:32.535538  432533 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1003 19:23:32.535876  432533 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1003 19:23:32.535924  432533 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 19:23:32.535976  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 19:23:32.596368  432533 cri.go:89] found id: "04bedbd6c6d1a6948852e8a02d927b6d181e1cdd8b926cadb512e6c5a9e2bc18"
	I1003 19:23:32.596387  432533 cri.go:89] found id: ""
	I1003 19:23:32.596396  432533 logs.go:282] 1 containers: [04bedbd6c6d1a6948852e8a02d927b6d181e1cdd8b926cadb512e6c5a9e2bc18]
	I1003 19:23:32.596454  432533 ssh_runner.go:195] Run: which crictl
	I1003 19:23:32.600366  432533 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 19:23:32.600433  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 19:23:32.640549  432533 cri.go:89] found id: ""
	I1003 19:23:32.640572  432533 logs.go:282] 0 containers: []
	W1003 19:23:32.640581  432533 logs.go:284] No container was found matching "etcd"
	I1003 19:23:32.640588  432533 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 19:23:32.640648  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 19:23:32.690008  432533 cri.go:89] found id: ""
	I1003 19:23:32.690030  432533 logs.go:282] 0 containers: []
	W1003 19:23:32.690040  432533 logs.go:284] No container was found matching "coredns"
	I1003 19:23:32.690047  432533 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 19:23:32.690103  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 19:23:32.748194  432533 cri.go:89] found id: "dc5ccb1606b8053522f591bfece54cc0b244422b0fc5af82da81d2215cabb3a1"
	I1003 19:23:32.748212  432533 cri.go:89] found id: ""
	I1003 19:23:32.748227  432533 logs.go:282] 1 containers: [dc5ccb1606b8053522f591bfece54cc0b244422b0fc5af82da81d2215cabb3a1]
	I1003 19:23:32.748285  432533 ssh_runner.go:195] Run: which crictl
	I1003 19:23:32.752292  432533 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 19:23:32.752361  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 19:23:32.810878  432533 cri.go:89] found id: ""
	I1003 19:23:32.810900  432533 logs.go:282] 0 containers: []
	W1003 19:23:32.810908  432533 logs.go:284] No container was found matching "kube-proxy"
	I1003 19:23:32.810916  432533 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 19:23:32.810971  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 19:23:32.861223  432533 cri.go:89] found id: "c62c95f3827d9847bcb196dcef4991555aba77e95ba4e6a5900a14faf7679b30"
	I1003 19:23:32.861297  432533 cri.go:89] found id: ""
	I1003 19:23:32.861321  432533 logs.go:282] 1 containers: [c62c95f3827d9847bcb196dcef4991555aba77e95ba4e6a5900a14faf7679b30]
	I1003 19:23:32.861401  432533 ssh_runner.go:195] Run: which crictl
	I1003 19:23:32.869084  432533 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 19:23:32.869204  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 19:23:32.905409  432533 cri.go:89] found id: ""
	I1003 19:23:32.905485  432533 logs.go:282] 0 containers: []
	W1003 19:23:32.905510  432533 logs.go:284] No container was found matching "kindnet"
	I1003 19:23:32.905529  432533 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1003 19:23:32.905623  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1003 19:23:32.961642  432533 cri.go:89] found id: ""
	I1003 19:23:32.961719  432533 logs.go:282] 0 containers: []
	W1003 19:23:32.961743  432533 logs.go:284] No container was found matching "storage-provisioner"
	I1003 19:23:32.961766  432533 logs.go:123] Gathering logs for kubelet ...
	I1003 19:23:32.961810  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 19:23:33.124526  432533 logs.go:123] Gathering logs for dmesg ...
	I1003 19:23:33.124603  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 19:23:33.147563  432533 logs.go:123] Gathering logs for describe nodes ...
	I1003 19:23:33.147639  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 19:23:33.287865  432533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 19:23:33.287937  432533 logs.go:123] Gathering logs for kube-apiserver [04bedbd6c6d1a6948852e8a02d927b6d181e1cdd8b926cadb512e6c5a9e2bc18] ...
	I1003 19:23:33.287963  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 04bedbd6c6d1a6948852e8a02d927b6d181e1cdd8b926cadb512e6c5a9e2bc18"
	I1003 19:23:33.338747  432533 logs.go:123] Gathering logs for kube-scheduler [dc5ccb1606b8053522f591bfece54cc0b244422b0fc5af82da81d2215cabb3a1] ...
	I1003 19:23:33.338817  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dc5ccb1606b8053522f591bfece54cc0b244422b0fc5af82da81d2215cabb3a1"
	I1003 19:23:33.442636  432533 logs.go:123] Gathering logs for kube-controller-manager [c62c95f3827d9847bcb196dcef4991555aba77e95ba4e6a5900a14faf7679b30] ...
	I1003 19:23:33.442671  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c62c95f3827d9847bcb196dcef4991555aba77e95ba4e6a5900a14faf7679b30"
	I1003 19:23:33.509574  432533 logs.go:123] Gathering logs for CRI-O ...
	I1003 19:23:33.509642  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 19:23:33.585523  432533 logs.go:123] Gathering logs for container status ...
	I1003 19:23:33.585562  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 19:23:36.164770  432533 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1003 19:23:36.165140  432533 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1003 19:23:36.165188  432533 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 19:23:36.165246  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 19:23:36.203237  432533 cri.go:89] found id: "04bedbd6c6d1a6948852e8a02d927b6d181e1cdd8b926cadb512e6c5a9e2bc18"
	I1003 19:23:36.203262  432533 cri.go:89] found id: ""
	I1003 19:23:36.203271  432533 logs.go:282] 1 containers: [04bedbd6c6d1a6948852e8a02d927b6d181e1cdd8b926cadb512e6c5a9e2bc18]
	I1003 19:23:36.203333  432533 ssh_runner.go:195] Run: which crictl
	I1003 19:23:36.207123  432533 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 19:23:36.207196  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 19:23:36.243531  432533 cri.go:89] found id: ""
	I1003 19:23:36.243558  432533 logs.go:282] 0 containers: []
	W1003 19:23:36.243568  432533 logs.go:284] No container was found matching "etcd"
	I1003 19:23:36.243581  432533 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 19:23:36.243638  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 19:23:36.277373  432533 cri.go:89] found id: ""
	I1003 19:23:36.277400  432533 logs.go:282] 0 containers: []
	W1003 19:23:36.277408  432533 logs.go:284] No container was found matching "coredns"
	I1003 19:23:36.277415  432533 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 19:23:36.277473  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 19:23:32.242925  444809 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 19:23:32.265164  444809 node_ready.go:35] waiting up to 6m0s for node "pause-844729" to be "Ready" ...
	I1003 19:23:35.303837  444809 node_ready.go:49] node "pause-844729" is "Ready"
	I1003 19:23:35.303927  444809 node_ready.go:38] duration metric: took 3.038678835s for node "pause-844729" to be "Ready" ...
	I1003 19:23:35.303957  444809 api_server.go:52] waiting for apiserver process to appear ...
	I1003 19:23:35.304046  444809 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 19:23:35.322130  444809 api_server.go:72] duration metric: took 3.35164839s to wait for apiserver process to appear ...
	I1003 19:23:35.322155  444809 api_server.go:88] waiting for apiserver healthz status ...
	I1003 19:23:35.322175  444809 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1003 19:23:35.337698  444809 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1003 19:23:35.337777  444809 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1003 19:23:35.823097  444809 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1003 19:23:35.831611  444809 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1003 19:23:35.831639  444809 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1003 19:23:36.322859  444809 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1003 19:23:36.343739  444809 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1003 19:23:36.343772  444809 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1003 19:23:36.822273  444809 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1003 19:23:36.834964  444809 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1003 19:23:36.836328  444809 api_server.go:141] control plane version: v1.34.1
	I1003 19:23:36.836358  444809 api_server.go:131] duration metric: took 1.514196138s to wait for apiserver health ...
	I1003 19:23:36.836367  444809 system_pods.go:43] waiting for kube-system pods to appear ...
	I1003 19:23:36.841816  444809 system_pods.go:59] 7 kube-system pods found
	I1003 19:23:36.841847  444809 system_pods.go:61] "coredns-66bc5c9577-z7pwb" [427f1d63-2b09-401a-b2f3-2e2a8248c11e] Running
	I1003 19:23:36.841853  444809 system_pods.go:61] "etcd-pause-844729" [560bbe09-f7d4-4218-8305-948f601f4cd4] Running
	I1003 19:23:36.841858  444809 system_pods.go:61] "kindnet-qhksz" [0596aa14-3857-4ba6-a81c-11b8c29baf94] Running
	I1003 19:23:36.841867  444809 system_pods.go:61] "kube-apiserver-pause-844729" [5d812d91-c2b2-4922-95f0-5dd38088ba5c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1003 19:23:36.841875  444809 system_pods.go:61] "kube-controller-manager-pause-844729" [079ad09c-44cf-41f0-b521-df3c4901c134] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1003 19:23:36.841880  444809 system_pods.go:61] "kube-proxy-vxnlc" [b9fa1a51-79ed-470a-a56a-1d830b23760e] Running
	I1003 19:23:36.841887  444809 system_pods.go:61] "kube-scheduler-pause-844729" [bb2ac954-5d5b-4df7-8de8-e0687da43946] Running
	I1003 19:23:36.841892  444809 system_pods.go:74] duration metric: took 5.520177ms to wait for pod list to return data ...
	I1003 19:23:36.841900  444809 default_sa.go:34] waiting for default service account to be created ...
	I1003 19:23:36.845673  444809 default_sa.go:45] found service account: "default"
	I1003 19:23:36.845696  444809 default_sa.go:55] duration metric: took 3.789997ms for default service account to be created ...
	I1003 19:23:36.845706  444809 system_pods.go:116] waiting for k8s-apps to be running ...
	I1003 19:23:36.848641  444809 system_pods.go:86] 7 kube-system pods found
	I1003 19:23:36.848776  444809 system_pods.go:89] "coredns-66bc5c9577-z7pwb" [427f1d63-2b09-401a-b2f3-2e2a8248c11e] Running
	I1003 19:23:36.848817  444809 system_pods.go:89] "etcd-pause-844729" [560bbe09-f7d4-4218-8305-948f601f4cd4] Running
	I1003 19:23:36.848836  444809 system_pods.go:89] "kindnet-qhksz" [0596aa14-3857-4ba6-a81c-11b8c29baf94] Running
	I1003 19:23:36.848856  444809 system_pods.go:89] "kube-apiserver-pause-844729" [5d812d91-c2b2-4922-95f0-5dd38088ba5c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1003 19:23:36.848892  444809 system_pods.go:89] "kube-controller-manager-pause-844729" [079ad09c-44cf-41f0-b521-df3c4901c134] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1003 19:23:36.848920  444809 system_pods.go:89] "kube-proxy-vxnlc" [b9fa1a51-79ed-470a-a56a-1d830b23760e] Running
	I1003 19:23:36.848942  444809 system_pods.go:89] "kube-scheduler-pause-844729" [bb2ac954-5d5b-4df7-8de8-e0687da43946] Running
	I1003 19:23:36.848976  444809 system_pods.go:126] duration metric: took 3.263645ms to wait for k8s-apps to be running ...
	I1003 19:23:36.848998  444809 system_svc.go:44] waiting for kubelet service to be running ....
	I1003 19:23:36.849089  444809 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 19:23:36.863201  444809 system_svc.go:56] duration metric: took 14.192423ms WaitForService to wait for kubelet
	I1003 19:23:36.863281  444809 kubeadm.go:586] duration metric: took 4.892803815s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 19:23:36.863315  444809 node_conditions.go:102] verifying NodePressure condition ...
	I1003 19:23:36.866615  444809 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1003 19:23:36.866693  444809 node_conditions.go:123] node cpu capacity is 2
	I1003 19:23:36.866720  444809 node_conditions.go:105] duration metric: took 3.378905ms to run NodePressure ...
	I1003 19:23:36.866748  444809 start.go:241] waiting for startup goroutines ...
	I1003 19:23:36.866776  444809 start.go:246] waiting for cluster config update ...
	I1003 19:23:36.866808  444809 start.go:255] writing updated cluster config ...
	I1003 19:23:36.867197  444809 ssh_runner.go:195] Run: rm -f paused
	I1003 19:23:36.871182  444809 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1003 19:23:36.871836  444809 kapi.go:59] client config for pause-844729: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21625-284583/.minikube/profiles/pause-844729/client.crt", KeyFile:"/home/jenkins/minikube-integration/21625-284583/.minikube/profiles/pause-844729/client.key", CAFile:"/home/jenkins/minikube-integration/21625-284583/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]s
tring(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120120), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1003 19:23:36.875274  444809 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-z7pwb" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:23:36.881940  444809 pod_ready.go:94] pod "coredns-66bc5c9577-z7pwb" is "Ready"
	I1003 19:23:36.881969  444809 pod_ready.go:86] duration metric: took 6.665681ms for pod "coredns-66bc5c9577-z7pwb" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:23:36.885925  444809 pod_ready.go:83] waiting for pod "etcd-pause-844729" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:23:36.891374  444809 pod_ready.go:94] pod "etcd-pause-844729" is "Ready"
	I1003 19:23:36.891403  444809 pod_ready.go:86] duration metric: took 5.451096ms for pod "etcd-pause-844729" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:23:36.894724  444809 pod_ready.go:83] waiting for pod "kube-apiserver-pause-844729" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:23:36.331428  432533 cri.go:89] found id: "dc5ccb1606b8053522f591bfece54cc0b244422b0fc5af82da81d2215cabb3a1"
	I1003 19:23:36.331452  432533 cri.go:89] found id: ""
	I1003 19:23:36.331471  432533 logs.go:282] 1 containers: [dc5ccb1606b8053522f591bfece54cc0b244422b0fc5af82da81d2215cabb3a1]
	I1003 19:23:36.331528  432533 ssh_runner.go:195] Run: which crictl
	I1003 19:23:36.335468  432533 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 19:23:36.335550  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 19:23:36.380900  432533 cri.go:89] found id: ""
	I1003 19:23:36.380937  432533 logs.go:282] 0 containers: []
	W1003 19:23:36.380946  432533 logs.go:284] No container was found matching "kube-proxy"
	I1003 19:23:36.380953  432533 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 19:23:36.381020  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 19:23:36.416759  432533 cri.go:89] found id: "c62c95f3827d9847bcb196dcef4991555aba77e95ba4e6a5900a14faf7679b30"
	I1003 19:23:36.416791  432533 cri.go:89] found id: ""
	I1003 19:23:36.416803  432533 logs.go:282] 1 containers: [c62c95f3827d9847bcb196dcef4991555aba77e95ba4e6a5900a14faf7679b30]
	I1003 19:23:36.416870  432533 ssh_runner.go:195] Run: which crictl
	I1003 19:23:36.421209  432533 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 19:23:36.421300  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 19:23:36.452859  432533 cri.go:89] found id: ""
	I1003 19:23:36.452898  432533 logs.go:282] 0 containers: []
	W1003 19:23:36.452907  432533 logs.go:284] No container was found matching "kindnet"
	I1003 19:23:36.452913  432533 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1003 19:23:36.452979  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1003 19:23:36.482928  432533 cri.go:89] found id: ""
	I1003 19:23:36.482972  432533 logs.go:282] 0 containers: []
	W1003 19:23:36.482981  432533 logs.go:284] No container was found matching "storage-provisioner"
	I1003 19:23:36.482991  432533 logs.go:123] Gathering logs for kube-scheduler [dc5ccb1606b8053522f591bfece54cc0b244422b0fc5af82da81d2215cabb3a1] ...
	I1003 19:23:36.483004  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dc5ccb1606b8053522f591bfece54cc0b244422b0fc5af82da81d2215cabb3a1"
	I1003 19:23:36.552196  432533 logs.go:123] Gathering logs for kube-controller-manager [c62c95f3827d9847bcb196dcef4991555aba77e95ba4e6a5900a14faf7679b30] ...
	I1003 19:23:36.552235  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c62c95f3827d9847bcb196dcef4991555aba77e95ba4e6a5900a14faf7679b30"
	I1003 19:23:36.580885  432533 logs.go:123] Gathering logs for CRI-O ...
	I1003 19:23:36.580910  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 19:23:36.643579  432533 logs.go:123] Gathering logs for container status ...
	I1003 19:23:36.643654  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 19:23:36.689003  432533 logs.go:123] Gathering logs for kubelet ...
	I1003 19:23:36.689026  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 19:23:36.821480  432533 logs.go:123] Gathering logs for dmesg ...
	I1003 19:23:36.821516  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 19:23:36.839553  432533 logs.go:123] Gathering logs for describe nodes ...
	I1003 19:23:36.839585  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 19:23:36.944938  432533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 19:23:36.944973  432533 logs.go:123] Gathering logs for kube-apiserver [04bedbd6c6d1a6948852e8a02d927b6d181e1cdd8b926cadb512e6c5a9e2bc18] ...
	I1003 19:23:36.944989  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 04bedbd6c6d1a6948852e8a02d927b6d181e1cdd8b926cadb512e6c5a9e2bc18"
	I1003 19:23:39.478300  432533 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1003 19:23:39.478732  432533 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1003 19:23:39.478782  432533 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 19:23:39.478835  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 19:23:39.509516  432533 cri.go:89] found id: "04bedbd6c6d1a6948852e8a02d927b6d181e1cdd8b926cadb512e6c5a9e2bc18"
	I1003 19:23:39.509548  432533 cri.go:89] found id: ""
	I1003 19:23:39.509557  432533 logs.go:282] 1 containers: [04bedbd6c6d1a6948852e8a02d927b6d181e1cdd8b926cadb512e6c5a9e2bc18]
	I1003 19:23:39.509614  432533 ssh_runner.go:195] Run: which crictl
	I1003 19:23:39.513425  432533 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 19:23:39.513495  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 19:23:39.539280  432533 cri.go:89] found id: ""
	I1003 19:23:39.539303  432533 logs.go:282] 0 containers: []
	W1003 19:23:39.539311  432533 logs.go:284] No container was found matching "etcd"
	I1003 19:23:39.539318  432533 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 19:23:39.539418  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 19:23:39.565413  432533 cri.go:89] found id: ""
	I1003 19:23:39.565435  432533 logs.go:282] 0 containers: []
	W1003 19:23:39.565443  432533 logs.go:284] No container was found matching "coredns"
	I1003 19:23:39.565449  432533 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 19:23:39.565506  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 19:23:39.590330  432533 cri.go:89] found id: "dc5ccb1606b8053522f591bfece54cc0b244422b0fc5af82da81d2215cabb3a1"
	I1003 19:23:39.590353  432533 cri.go:89] found id: ""
	I1003 19:23:39.590362  432533 logs.go:282] 1 containers: [dc5ccb1606b8053522f591bfece54cc0b244422b0fc5af82da81d2215cabb3a1]
	I1003 19:23:39.590437  432533 ssh_runner.go:195] Run: which crictl
	I1003 19:23:39.593967  432533 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 19:23:39.594076  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 19:23:39.623196  432533 cri.go:89] found id: ""
	I1003 19:23:39.623227  432533 logs.go:282] 0 containers: []
	W1003 19:23:39.623237  432533 logs.go:284] No container was found matching "kube-proxy"
	I1003 19:23:39.623243  432533 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 19:23:39.623307  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 19:23:39.651035  432533 cri.go:89] found id: "c62c95f3827d9847bcb196dcef4991555aba77e95ba4e6a5900a14faf7679b30"
	I1003 19:23:39.651065  432533 cri.go:89] found id: ""
	I1003 19:23:39.651074  432533 logs.go:282] 1 containers: [c62c95f3827d9847bcb196dcef4991555aba77e95ba4e6a5900a14faf7679b30]
	I1003 19:23:39.651136  432533 ssh_runner.go:195] Run: which crictl
	I1003 19:23:39.654626  432533 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 19:23:39.654694  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 19:23:39.683461  432533 cri.go:89] found id: ""
	I1003 19:23:39.683534  432533 logs.go:282] 0 containers: []
	W1003 19:23:39.683557  432533 logs.go:284] No container was found matching "kindnet"
	I1003 19:23:39.683577  432533 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1003 19:23:39.683663  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1003 19:23:39.715039  432533 cri.go:89] found id: ""
	I1003 19:23:39.715064  432533 logs.go:282] 0 containers: []
	W1003 19:23:39.715072  432533 logs.go:284] No container was found matching "storage-provisioner"
	I1003 19:23:39.715082  432533 logs.go:123] Gathering logs for container status ...
	I1003 19:23:39.715093  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 19:23:39.743331  432533 logs.go:123] Gathering logs for kubelet ...
	I1003 19:23:39.743362  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 19:23:39.855453  432533 logs.go:123] Gathering logs for dmesg ...
	I1003 19:23:39.855490  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 19:23:39.872491  432533 logs.go:123] Gathering logs for describe nodes ...
	I1003 19:23:39.872521  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 19:23:39.939187  432533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 19:23:39.939207  432533 logs.go:123] Gathering logs for kube-apiserver [04bedbd6c6d1a6948852e8a02d927b6d181e1cdd8b926cadb512e6c5a9e2bc18] ...
	I1003 19:23:39.939221  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 04bedbd6c6d1a6948852e8a02d927b6d181e1cdd8b926cadb512e6c5a9e2bc18"
	I1003 19:23:39.984125  432533 logs.go:123] Gathering logs for kube-scheduler [dc5ccb1606b8053522f591bfece54cc0b244422b0fc5af82da81d2215cabb3a1] ...
	I1003 19:23:39.984159  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dc5ccb1606b8053522f591bfece54cc0b244422b0fc5af82da81d2215cabb3a1"
	I1003 19:23:40.057150  432533 logs.go:123] Gathering logs for kube-controller-manager [c62c95f3827d9847bcb196dcef4991555aba77e95ba4e6a5900a14faf7679b30] ...
	I1003 19:23:40.057186  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c62c95f3827d9847bcb196dcef4991555aba77e95ba4e6a5900a14faf7679b30"
	I1003 19:23:40.085486  432533 logs.go:123] Gathering logs for CRI-O ...
	I1003 19:23:40.085518  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1003 19:23:38.899729  444809 pod_ready.go:104] pod "kube-apiserver-pause-844729" is not "Ready", error: <nil>
	W1003 19:23:40.900924  444809 pod_ready.go:104] pod "kube-apiserver-pause-844729" is not "Ready", error: <nil>
	I1003 19:23:42.647553  432533 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1003 19:23:42.647994  432533 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1003 19:23:42.648041  432533 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 19:23:42.648103  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 19:23:42.680331  432533 cri.go:89] found id: "04bedbd6c6d1a6948852e8a02d927b6d181e1cdd8b926cadb512e6c5a9e2bc18"
	I1003 19:23:42.680355  432533 cri.go:89] found id: ""
	I1003 19:23:42.680363  432533 logs.go:282] 1 containers: [04bedbd6c6d1a6948852e8a02d927b6d181e1cdd8b926cadb512e6c5a9e2bc18]
	I1003 19:23:42.680419  432533 ssh_runner.go:195] Run: which crictl
	I1003 19:23:42.683970  432533 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 19:23:42.684080  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 19:23:42.711803  432533 cri.go:89] found id: ""
	I1003 19:23:42.711838  432533 logs.go:282] 0 containers: []
	W1003 19:23:42.711847  432533 logs.go:284] No container was found matching "etcd"
	I1003 19:23:42.711869  432533 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 19:23:42.711967  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 19:23:42.738657  432533 cri.go:89] found id: ""
	I1003 19:23:42.738698  432533 logs.go:282] 0 containers: []
	W1003 19:23:42.738707  432533 logs.go:284] No container was found matching "coredns"
	I1003 19:23:42.738713  432533 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 19:23:42.738804  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 19:23:42.767317  432533 cri.go:89] found id: "dc5ccb1606b8053522f591bfece54cc0b244422b0fc5af82da81d2215cabb3a1"
	I1003 19:23:42.767338  432533 cri.go:89] found id: ""
	I1003 19:23:42.767347  432533 logs.go:282] 1 containers: [dc5ccb1606b8053522f591bfece54cc0b244422b0fc5af82da81d2215cabb3a1]
	I1003 19:23:42.767404  432533 ssh_runner.go:195] Run: which crictl
	I1003 19:23:42.771217  432533 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 19:23:42.771284  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 19:23:42.798315  432533 cri.go:89] found id: ""
	I1003 19:23:42.798362  432533 logs.go:282] 0 containers: []
	W1003 19:23:42.798388  432533 logs.go:284] No container was found matching "kube-proxy"
	I1003 19:23:42.798398  432533 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 19:23:42.798484  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 19:23:42.825681  432533 cri.go:89] found id: "c62c95f3827d9847bcb196dcef4991555aba77e95ba4e6a5900a14faf7679b30"
	I1003 19:23:42.825705  432533 cri.go:89] found id: ""
	I1003 19:23:42.825714  432533 logs.go:282] 1 containers: [c62c95f3827d9847bcb196dcef4991555aba77e95ba4e6a5900a14faf7679b30]
	I1003 19:23:42.825790  432533 ssh_runner.go:195] Run: which crictl
	I1003 19:23:42.829471  432533 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 19:23:42.829572  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 19:23:42.856918  432533 cri.go:89] found id: ""
	I1003 19:23:42.856949  432533 logs.go:282] 0 containers: []
	W1003 19:23:42.856959  432533 logs.go:284] No container was found matching "kindnet"
	I1003 19:23:42.856965  432533 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1003 19:23:42.857026  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1003 19:23:42.883631  432533 cri.go:89] found id: ""
	I1003 19:23:42.883653  432533 logs.go:282] 0 containers: []
	W1003 19:23:42.883661  432533 logs.go:284] No container was found matching "storage-provisioner"
	I1003 19:23:42.883670  432533 logs.go:123] Gathering logs for dmesg ...
	I1003 19:23:42.883681  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1003 19:23:42.904093  432533 logs.go:123] Gathering logs for describe nodes ...
	I1003 19:23:42.904161  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1003 19:23:42.974734  432533 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1003 19:23:42.974761  432533 logs.go:123] Gathering logs for kube-apiserver [04bedbd6c6d1a6948852e8a02d927b6d181e1cdd8b926cadb512e6c5a9e2bc18] ...
	I1003 19:23:42.974775  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 04bedbd6c6d1a6948852e8a02d927b6d181e1cdd8b926cadb512e6c5a9e2bc18"
	I1003 19:23:43.014144  432533 logs.go:123] Gathering logs for kube-scheduler [dc5ccb1606b8053522f591bfece54cc0b244422b0fc5af82da81d2215cabb3a1] ...
	I1003 19:23:43.014184  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dc5ccb1606b8053522f591bfece54cc0b244422b0fc5af82da81d2215cabb3a1"
	I1003 19:23:43.080172  432533 logs.go:123] Gathering logs for kube-controller-manager [c62c95f3827d9847bcb196dcef4991555aba77e95ba4e6a5900a14faf7679b30] ...
	I1003 19:23:43.080203  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c62c95f3827d9847bcb196dcef4991555aba77e95ba4e6a5900a14faf7679b30"
	I1003 19:23:43.140202  432533 logs.go:123] Gathering logs for CRI-O ...
	I1003 19:23:43.140229  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 19:23:43.201495  432533 logs.go:123] Gathering logs for container status ...
	I1003 19:23:43.201531  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1003 19:23:43.232539  432533 logs.go:123] Gathering logs for kubelet ...
	I1003 19:23:43.232566  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1003 19:23:45.859028  432533 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1003 19:23:45.859459  432533 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1003 19:23:45.859503  432533 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1003 19:23:45.859561  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1003 19:23:45.892662  432533 cri.go:89] found id: "04bedbd6c6d1a6948852e8a02d927b6d181e1cdd8b926cadb512e6c5a9e2bc18"
	I1003 19:23:45.892681  432533 cri.go:89] found id: ""
	I1003 19:23:45.892689  432533 logs.go:282] 1 containers: [04bedbd6c6d1a6948852e8a02d927b6d181e1cdd8b926cadb512e6c5a9e2bc18]
	I1003 19:23:45.892779  432533 ssh_runner.go:195] Run: which crictl
	I1003 19:23:45.898188  432533 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1003 19:23:45.898258  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1003 19:23:45.932330  432533 cri.go:89] found id: ""
	I1003 19:23:45.932353  432533 logs.go:282] 0 containers: []
	W1003 19:23:45.932362  432533 logs.go:284] No container was found matching "etcd"
	I1003 19:23:45.932368  432533 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1003 19:23:45.932430  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1003 19:23:45.962429  432533 cri.go:89] found id: ""
	I1003 19:23:45.962452  432533 logs.go:282] 0 containers: []
	W1003 19:23:45.962460  432533 logs.go:284] No container was found matching "coredns"
	I1003 19:23:45.962466  432533 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1003 19:23:45.962524  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1003 19:23:45.989699  432533 cri.go:89] found id: "dc5ccb1606b8053522f591bfece54cc0b244422b0fc5af82da81d2215cabb3a1"
	I1003 19:23:45.989722  432533 cri.go:89] found id: ""
	I1003 19:23:45.989732  432533 logs.go:282] 1 containers: [dc5ccb1606b8053522f591bfece54cc0b244422b0fc5af82da81d2215cabb3a1]
	I1003 19:23:45.989793  432533 ssh_runner.go:195] Run: which crictl
	I1003 19:23:45.993524  432533 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1003 19:23:45.993593  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1003 19:23:46.020602  432533 cri.go:89] found id: ""
	I1003 19:23:46.020631  432533 logs.go:282] 0 containers: []
	W1003 19:23:46.020640  432533 logs.go:284] No container was found matching "kube-proxy"
	I1003 19:23:46.020647  432533 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1003 19:23:46.020710  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1003 19:23:46.048002  432533 cri.go:89] found id: "c62c95f3827d9847bcb196dcef4991555aba77e95ba4e6a5900a14faf7679b30"
	I1003 19:23:46.048026  432533 cri.go:89] found id: ""
	I1003 19:23:46.048034  432533 logs.go:282] 1 containers: [c62c95f3827d9847bcb196dcef4991555aba77e95ba4e6a5900a14faf7679b30]
	I1003 19:23:46.048091  432533 ssh_runner.go:195] Run: which crictl
	I1003 19:23:46.051883  432533 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1003 19:23:46.051966  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1003 19:23:46.081600  432533 cri.go:89] found id: ""
	I1003 19:23:46.081626  432533 logs.go:282] 0 containers: []
	W1003 19:23:46.081635  432533 logs.go:284] No container was found matching "kindnet"
	I1003 19:23:46.081642  432533 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1003 19:23:46.081706  432533 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1003 19:23:46.135656  432533 cri.go:89] found id: ""
	I1003 19:23:46.135685  432533 logs.go:282] 0 containers: []
	W1003 19:23:46.135694  432533 logs.go:284] No container was found matching "storage-provisioner"
	I1003 19:23:46.135704  432533 logs.go:123] Gathering logs for kube-controller-manager [c62c95f3827d9847bcb196dcef4991555aba77e95ba4e6a5900a14faf7679b30] ...
	I1003 19:23:46.135716  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c62c95f3827d9847bcb196dcef4991555aba77e95ba4e6a5900a14faf7679b30"
	I1003 19:23:46.200303  432533 logs.go:123] Gathering logs for CRI-O ...
	I1003 19:23:46.200331  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1003 19:23:46.275263  432533 logs.go:123] Gathering logs for container status ...
	I1003 19:23:46.275349  432533 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1003 19:23:42.901251  444809 pod_ready.go:104] pod "kube-apiserver-pause-844729" is not "Ready", error: <nil>
	W1003 19:23:45.401779  444809 pod_ready.go:104] pod "kube-apiserver-pause-844729" is not "Ready", error: <nil>
	I1003 19:23:46.404201  444809 pod_ready.go:94] pod "kube-apiserver-pause-844729" is "Ready"
	I1003 19:23:46.404224  444809 pod_ready.go:86] duration metric: took 9.509472841s for pod "kube-apiserver-pause-844729" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:23:46.411223  444809 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-844729" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:23:46.422418  444809 pod_ready.go:94] pod "kube-controller-manager-pause-844729" is "Ready"
	I1003 19:23:46.422441  444809 pod_ready.go:86] duration metric: took 11.195284ms for pod "kube-controller-manager-pause-844729" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:23:46.426015  444809 pod_ready.go:83] waiting for pod "kube-proxy-vxnlc" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:23:46.436115  444809 pod_ready.go:94] pod "kube-proxy-vxnlc" is "Ready"
	I1003 19:23:46.436136  444809 pod_ready.go:86] duration metric: took 10.104098ms for pod "kube-proxy-vxnlc" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:23:46.443413  444809 pod_ready.go:83] waiting for pod "kube-scheduler-pause-844729" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:23:46.597512  444809 pod_ready.go:94] pod "kube-scheduler-pause-844729" is "Ready"
	I1003 19:23:46.597536  444809 pod_ready.go:86] duration metric: took 154.105274ms for pod "kube-scheduler-pause-844729" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:23:46.597547  444809 pod_ready.go:40] duration metric: took 9.726334779s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1003 19:23:46.667748  444809 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1003 19:23:46.670958  444809 out.go:179] * Done! kubectl is now configured to use "pause-844729" cluster and "default" namespace by default
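
The readiness waits recorded above can be approximated directly with kubectl; a minimal sketch, assuming the "pause-844729" context created by this run and using the label selectors listed in the wait summary (the 120s timeout is illustrative, not taken from the test):

    # Approximate the per-pod readiness checks logged above; the selectors come
    # from the wait summary, the 120s timeout is an assumed value.
    kubectl --context pause-844729 -n kube-system wait pod \
      -l component=kube-apiserver --for=condition=Ready --timeout=120s
    kubectl --context pause-844729 -n kube-system wait pod \
      -l k8s-app=kube-proxy --for=condition=Ready --timeout=120s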
	
	
	==> CRI-O <==
	Oct 03 19:23:31 pause-844729 crio[2057]: time="2025-10-03T19:23:31.262941064Z" level=info msg="Created container 6168e29def1182e29c0bf294c1c3d7237309f9f85b32e17a56b611beab0de0f3: kube-system/kube-proxy-vxnlc/kube-proxy" id=5913cde7-fc45-475e-9031-3a820599154b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:23:31 pause-844729 crio[2057]: time="2025-10-03T19:23:31.264032833Z" level=info msg="Starting container: 6168e29def1182e29c0bf294c1c3d7237309f9f85b32e17a56b611beab0de0f3" id=e20c315e-f72f-4306-b4d1-bcc9cde9ceaa name=/runtime.v1.RuntimeService/StartContainer
	Oct 03 19:23:31 pause-844729 crio[2057]: time="2025-10-03T19:23:31.271441722Z" level=info msg="Started container" PID=2290 containerID=45e08f5b3c8750ac2fd35558a348abcfe4889f155ac6450a819fd64a7c7330b8 description=kube-system/coredns-66bc5c9577-z7pwb/coredns id=f07528cc-9c18-4fc2-a250-7063dc0a4f2d name=/runtime.v1.RuntimeService/StartContainer sandboxID=5cc933b35d332d0d876c4eb2f62af8e09a9143f4bde00dbd2713f86227e431c5
	Oct 03 19:23:31 pause-844729 crio[2057]: time="2025-10-03T19:23:31.27372286Z" level=info msg="Created container da0cffc30d07d485c02b0ec61d8a9b3909ac227213b2060ee5749f2e4c309f14: kube-system/kube-scheduler-pause-844729/kube-scheduler" id=18bc5083-fe63-49bc-9260-109a8fa181a9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:23:31 pause-844729 crio[2057]: time="2025-10-03T19:23:31.294021135Z" level=info msg="Started container" PID=2282 containerID=6168e29def1182e29c0bf294c1c3d7237309f9f85b32e17a56b611beab0de0f3 description=kube-system/kube-proxy-vxnlc/kube-proxy id=e20c315e-f72f-4306-b4d1-bcc9cde9ceaa name=/runtime.v1.RuntimeService/StartContainer sandboxID=644fa1917083fbc943674808dbbdd1d251a3fd88a79fef84039bf218ca1695b8
	Oct 03 19:23:31 pause-844729 crio[2057]: time="2025-10-03T19:23:31.294933142Z" level=info msg="Starting container: da0cffc30d07d485c02b0ec61d8a9b3909ac227213b2060ee5749f2e4c309f14" id=442e6b27-7dae-46b5-9880-98794fa09c69 name=/runtime.v1.RuntimeService/StartContainer
	Oct 03 19:23:31 pause-844729 crio[2057]: time="2025-10-03T19:23:31.30118577Z" level=info msg="Started container" PID=2288 containerID=da0cffc30d07d485c02b0ec61d8a9b3909ac227213b2060ee5749f2e4c309f14 description=kube-system/kube-scheduler-pause-844729/kube-scheduler id=442e6b27-7dae-46b5-9880-98794fa09c69 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ca7502bb97a300a30f11e885c6606dfcc69ed119dd79ec50f2e178e7692fd36a
	Oct 03 19:23:31 pause-844729 crio[2057]: time="2025-10-03T19:23:31.301943083Z" level=info msg="Created container b7cace5722ba0dea6c3f841afbdc009c616b089fc76a656644b067a4f8e082ea: kube-system/etcd-pause-844729/etcd" id=c2779ed4-9929-4231-b1fa-eab9f2ec0481 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:23:31 pause-844729 crio[2057]: time="2025-10-03T19:23:31.302220142Z" level=info msg="Created container b4edb0bc8b2e10ddd91a1f18e41714e9b020effe870b870ad1548c51abdd698a: kube-system/kube-apiserver-pause-844729/kube-apiserver" id=3c7277d9-edb0-41f2-8505-59a27e323354 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:23:31 pause-844729 crio[2057]: time="2025-10-03T19:23:31.30335201Z" level=info msg="Starting container: b4edb0bc8b2e10ddd91a1f18e41714e9b020effe870b870ad1548c51abdd698a" id=e10fff39-d34b-4d16-b490-42ed334bb5a0 name=/runtime.v1.RuntimeService/StartContainer
	Oct 03 19:23:31 pause-844729 crio[2057]: time="2025-10-03T19:23:31.303530483Z" level=info msg="Starting container: b7cace5722ba0dea6c3f841afbdc009c616b089fc76a656644b067a4f8e082ea" id=9a4e8af7-6a4b-4f8b-bf57-ab37f1a2d971 name=/runtime.v1.RuntimeService/StartContainer
	Oct 03 19:23:31 pause-844729 crio[2057]: time="2025-10-03T19:23:31.324040248Z" level=info msg="Started container" PID=2315 containerID=b4edb0bc8b2e10ddd91a1f18e41714e9b020effe870b870ad1548c51abdd698a description=kube-system/kube-apiserver-pause-844729/kube-apiserver id=e10fff39-d34b-4d16-b490-42ed334bb5a0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0ddc7b2b69ef2ab3ec3a577469acdd1b7acef0085201a5fcde402b8d38aa7a50
	Oct 03 19:23:31 pause-844729 crio[2057]: time="2025-10-03T19:23:31.324319145Z" level=info msg="Started container" PID=2317 containerID=b7cace5722ba0dea6c3f841afbdc009c616b089fc76a656644b067a4f8e082ea description=kube-system/etcd-pause-844729/etcd id=9a4e8af7-6a4b-4f8b-bf57-ab37f1a2d971 name=/runtime.v1.RuntimeService/StartContainer sandboxID=efa997eec887f8dc5f8eefa59d472018f4b4caf06d80c163980f4f5e0a747155
	Oct 03 19:23:41 pause-844729 crio[2057]: time="2025-10-03T19:23:41.398406412Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 03 19:23:41 pause-844729 crio[2057]: time="2025-10-03T19:23:41.403312063Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 03 19:23:41 pause-844729 crio[2057]: time="2025-10-03T19:23:41.403476152Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 03 19:23:41 pause-844729 crio[2057]: time="2025-10-03T19:23:41.403555645Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 03 19:23:41 pause-844729 crio[2057]: time="2025-10-03T19:23:41.406955095Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 03 19:23:41 pause-844729 crio[2057]: time="2025-10-03T19:23:41.406988671Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 03 19:23:41 pause-844729 crio[2057]: time="2025-10-03T19:23:41.407011293Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 03 19:23:41 pause-844729 crio[2057]: time="2025-10-03T19:23:41.410151745Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 03 19:23:41 pause-844729 crio[2057]: time="2025-10-03T19:23:41.410185263Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 03 19:23:41 pause-844729 crio[2057]: time="2025-10-03T19:23:41.410207015Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 03 19:23:41 pause-844729 crio[2057]: time="2025-10-03T19:23:41.413261262Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 03 19:23:41 pause-844729 crio[2057]: time="2025-10-03T19:23:41.413295175Z" level=info msg="Updated default CNI network name to kindnet"
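
The CRI-O excerpt above is the tail of the node's crio service journal; a minimal sketch for regenerating it, assuming the same profile and the same collection command the run used earlier (journalctl against the crio unit):

    # Pull the last 400 lines of the CRI-O journal from the node, as the log
    # collector did earlier in this run.
    minikube -p pause-844729 ssh -- sudo journalctl -u crio -n 400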
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	b4edb0bc8b2e1       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   20 seconds ago       Running             kube-apiserver            1                   0ddc7b2b69ef2       kube-apiserver-pause-844729            kube-system
	b7cace5722ba0       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   20 seconds ago       Running             etcd                      1                   efa997eec887f       etcd-pause-844729                      kube-system
	da0cffc30d07d       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   20 seconds ago       Running             kube-scheduler            1                   ca7502bb97a30       kube-scheduler-pause-844729            kube-system
	45e08f5b3c875       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   20 seconds ago       Running             coredns                   1                   5cc933b35d332       coredns-66bc5c9577-z7pwb               kube-system
	6168e29def118       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   20 seconds ago       Running             kube-proxy                1                   644fa1917083f       kube-proxy-vxnlc                       kube-system
	e76d5b298ebfd       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   20 seconds ago       Running             kindnet-cni               1                   0315418422ebd       kindnet-qhksz                          kube-system
	5bc9d928c66f7       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   20 seconds ago       Running             kube-controller-manager   1                   e33cde34f7d3d       kube-controller-manager-pause-844729   kube-system
	84fa045c869f1       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   32 seconds ago       Exited              coredns                   0                   5cc933b35d332       coredns-66bc5c9577-z7pwb               kube-system
	5d124f9877dc3       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   0315418422ebd       kindnet-qhksz                          kube-system
	857ea2e27fd54       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   644fa1917083f       kube-proxy-vxnlc                       kube-system
	0e24f3ce9f6cb       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   efa997eec887f       etcd-pause-844729                      kube-system
	fd3fe7965793a       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   0ddc7b2b69ef2       kube-apiserver-pause-844729            kube-system
	6f18ec5c83f04       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   ca7502bb97a30       kube-scheduler-pause-844729            kube-system
	fe077fc7b7398       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   e33cde34f7d3d       kube-controller-manager-pause-844729   kube-system
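
The container status table that follows is crictl output from inside the node; a minimal sketch for regenerating it, assuming SSH access via the same profile and that crictl is on the node's PATH (the collector's fallback command earlier in this run makes the same assumption):

    # List all CRI-O managed containers, including exited ones, matching the
    # "container status" table in this report.
    minikube -p pause-844729 ssh -- sudo crictl ps -a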
	
	
	==> coredns [45e08f5b3c8750ac2fd35558a348abcfe4889f155ac6450a819fd64a7c7330b8] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40072 - 43023 "HINFO IN 8662707546940497813.8488772312819346048. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012124736s
	
	
	==> coredns [84fa045c869f127f450bb8752bea5a8159645bcb9dc95bf2aa9c7f45b5311ca2] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53695 - 65056 "HINFO IN 7853163631885255782.674258907920431086. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.012733427s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-844729
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-844729
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a43873c79fc22f8b1ccd29d3dfa635d392b09335
	                    minikube.k8s.io/name=pause-844729
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_03T19_22_32_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 03 Oct 2025 19:22:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-844729
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 03 Oct 2025 19:23:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 03 Oct 2025 19:23:45 +0000   Fri, 03 Oct 2025 19:22:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 03 Oct 2025 19:23:45 +0000   Fri, 03 Oct 2025 19:22:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 03 Oct 2025 19:23:45 +0000   Fri, 03 Oct 2025 19:22:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 03 Oct 2025 19:23:45 +0000   Fri, 03 Oct 2025 19:23:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-844729
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 02d1e61509b2434098162c1013a2ff8e
	  System UUID:                0531cd00-e7b4-4767-9f36-05e850ecbd5e
	  Boot ID:                    3762136e-8bec-4104-a5cb-0b1976f6048e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-z7pwb                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     74s
	  kube-system                 etcd-pause-844729                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         79s
	  kube-system                 kindnet-qhksz                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      75s
	  kube-system                 kube-apiserver-pause-844729             250m (12%)    0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-controller-manager-pause-844729    200m (10%)    0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-proxy-vxnlc                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 kube-scheduler-pause-844729             100m (5%)     0 (0%)      0 (0%)           0 (0%)         79s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 73s                kube-proxy       
	  Normal   Starting                 15s                kube-proxy       
	  Warning  CgroupV1                 88s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  88s (x8 over 88s)  kubelet          Node pause-844729 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    88s (x8 over 88s)  kubelet          Node pause-844729 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     88s (x8 over 88s)  kubelet          Node pause-844729 status is now: NodeHasSufficientPID
	  Normal   Starting                 80s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 80s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  80s                kubelet          Node pause-844729 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    80s                kubelet          Node pause-844729 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     80s                kubelet          Node pause-844729 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           76s                node-controller  Node pause-844729 event: Registered Node pause-844729 in Controller
	  Normal   NodeReady                33s                kubelet          Node pause-844729 status is now: NodeReady
	  Normal   RegisteredNode           13s                node-controller  Node pause-844729 event: Registered Node pause-844729 in Controller
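
The node summary above corresponds to a kubectl describe of the cluster's single node; a minimal sketch, assuming the same context and node name:

    # Regenerate the "describe nodes" section above.
    kubectl --context pause-844729 describe node pause-844729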
	
	
	==> dmesg <==
	[Oct 3 18:56] overlayfs: idmapped layers are currently not supported
	[  +3.564365] overlayfs: idmapped layers are currently not supported
	[Oct 3 18:58] overlayfs: idmapped layers are currently not supported
	[Oct 3 18:59] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:00] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:05] overlayfs: idmapped layers are currently not supported
	[ +33.149550] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:07] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:08] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:09] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:10] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:11] overlayfs: idmapped layers are currently not supported
	[  +4.287643] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:12] overlayfs: idmapped layers are currently not supported
	[ +24.839009] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:13] overlayfs: idmapped layers are currently not supported
	[ +26.493253] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:15] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:16] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:17] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000010] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Oct 3 19:18] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:20] overlayfs: idmapped layers are currently not supported
	[ +32.018892] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:22] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [0e24f3ce9f6cbd2fee0b930845a84383d871589f9e0d5410c93ebc0a1007c92f] <==
	{"level":"warn","ts":"2025-10-03T19:22:27.211672Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:22:27.241236Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:22:27.304946Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:22:27.330681Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:22:27.354034Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:22:27.418130Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:22:27.544841Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41924","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-03T19:23:23.301959Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-03T19:23:23.302010Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-844729","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"error","ts":"2025-10-03T19:23:23.302096Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-03T19:23:23.447356Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-03T19:23:23.447497Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2025-10-03T19:23:23.447708Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-03T19:23:23.447756Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"error","ts":"2025-10-03T19:23:23.447269Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"warn","ts":"2025-10-03T19:23:23.448113Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-03T19:23:23.448246Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-03T19:23:23.448280Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-03T19:23:23.448369Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-03T19:23:23.448428Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-03T19:23:23.448462Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-03T19:23:23.451296Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"error","ts":"2025-10-03T19:23:23.451426Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-03T19:23:23.451500Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-03T19:23:23.451535Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-844729","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> etcd [b7cace5722ba0dea6c3f841afbdc009c616b089fc76a656644b067a4f8e082ea] <==
	{"level":"warn","ts":"2025-10-03T19:23:33.766283Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:23:33.793003Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:23:33.822940Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:23:33.853216Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:23:33.882833Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:23:33.914995Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:23:33.944835Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:23:33.983369Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:23:33.990703Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:23:34.019393Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:23:34.062052Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:23:34.099320Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:23:34.131068Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:23:34.157325Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:23:34.201341Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:23:34.225216Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:23:34.257279Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:23:34.269491Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:23:34.292063Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:23:34.310935Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:23:34.327274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:23:34.361847Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:23:34.385955Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:23:34.414948Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:23:34.496349Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50826","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 19:23:51 up  2:06,  0 user,  load average: 3.02, 3.26, 2.59
	Linux pause-844729 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5d124f9877dc3034ad8f48f78e4d24801d20c0a339bfef51da35d2994dbc8ecd] <==
	I1003 19:22:38.194205       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1003 19:22:38.194455       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1003 19:22:38.194595       1 main.go:148] setting mtu 1500 for CNI 
	I1003 19:22:38.194614       1 main.go:178] kindnetd IP family: "ipv4"
	I1003 19:22:38.194624       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-03T19:22:38Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1003 19:22:38.395011       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1003 19:22:38.395037       1 controller.go:381] "Waiting for informer caches to sync"
	I1003 19:22:38.395047       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1003 19:22:38.395330       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1003 19:23:08.395141       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1003 19:23:08.395141       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1003 19:23:08.395372       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1003 19:23:08.396409       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1003 19:23:09.995523       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1003 19:23:09.995637       1 metrics.go:72] Registering metrics
	I1003 19:23:09.995732       1 controller.go:711] "Syncing nftables rules"
	I1003 19:23:18.395564       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1003 19:23:18.395625       1 main.go:301] handling current node
	
	
	==> kindnet [e76d5b298ebfdc13c2635e65d607a1504f98294c7e20d1bb64f2ce5a749224ef] <==
	I1003 19:23:31.196289       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1003 19:23:31.196435       1 main.go:148] setting mtu 1500 for CNI 
	I1003 19:23:31.196448       1 main.go:178] kindnetd IP family: "ipv4"
	I1003 19:23:31.196462       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-03T19:23:31Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	E1003 19:23:31.421053       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1003 19:23:31.421459       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1003 19:23:31.421471       1 controller.go:381] "Waiting for informer caches to sync"
	I1003 19:23:31.421485       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1003 19:23:31.421767       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1003 19:23:31.421877       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1003 19:23:31.421949       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1003 19:23:31.422238       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1003 19:23:35.403281       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1003 19:23:35.403411       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1003 19:23:35.403482       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"networkpolicies\" in API group \"networking.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1003 19:23:35.403568       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:serviceaccount:kube-system:kindnet\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1003 19:23:38.521585       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1003 19:23:38.521616       1 metrics.go:72] Registering metrics
	I1003 19:23:38.521685       1 controller.go:711] "Syncing nftables rules"
	I1003 19:23:41.397987       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1003 19:23:41.398115       1 main.go:301] handling current node
	I1003 19:23:51.400882       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1003 19:23:51.400918       1 main.go:301] handling current node
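
Per-container logs like the two kindnet excerpts above can be pulled by container ID with crictl, the same way the collector tailed the kube-controller-manager log earlier in this run; a minimal sketch, using the running kindnet container's ID from the section header above:

    # Tail the running kindnet-cni container's log; --tail 400 mirrors the
    # collector's invocation, the ID is copied from this report.
    minikube -p pause-844729 ssh -- sudo crictl logs --tail 400 \
      e76d5b298ebfdc13c2635e65d607a1504f98294c7e20d1bb64f2ce5a749224ef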
	
	
	==> kube-apiserver [b4edb0bc8b2e10ddd91a1f18e41714e9b020effe870b870ad1548c51abdd698a] <==
	I1003 19:23:35.374888       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1003 19:23:35.374985       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1003 19:23:35.388975       1 aggregator.go:171] initial CRD sync complete...
	I1003 19:23:35.389052       1 autoregister_controller.go:144] Starting autoregister controller
	I1003 19:23:35.392238       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1003 19:23:35.421199       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1003 19:23:35.433114       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1003 19:23:35.463680       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1003 19:23:35.463713       1 policy_source.go:240] refreshing policies
	I1003 19:23:35.468075       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1003 19:23:35.468929       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1003 19:23:35.475792       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1003 19:23:35.476899       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1003 19:23:35.476999       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1003 19:23:35.480402       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1003 19:23:35.480432       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1003 19:23:35.480560       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1003 19:23:35.486890       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1003 19:23:35.494806       1 cache.go:39] Caches are synced for autoregister controller
	I1003 19:23:36.074730       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1003 19:23:37.350782       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1003 19:23:41.763009       1 controller.go:667] quota admission added evaluator for: endpoints
	I1003 19:23:41.767151       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1003 19:23:41.770006       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1003 19:23:41.801721       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-apiserver [fd3fe7965793a71c3c6f9b9521b6b0c283e6b5ed6f1f5aee7fbfb482b5af6f32] <==
	I1003 19:22:28.696500       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1003 19:22:28.699359       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1003 19:22:28.703239       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1003 19:22:28.720834       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1003 19:22:28.721064       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1003 19:22:29.381470       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1003 19:22:29.386464       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1003 19:22:29.386490       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1003 19:22:30.256284       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1003 19:22:30.329953       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1003 19:22:30.501254       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1003 19:22:30.511348       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1003 19:22:30.513799       1 controller.go:667] quota admission added evaluator for: endpoints
	I1003 19:22:30.520467       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1003 19:22:30.582240       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1003 19:22:31.711880       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1003 19:22:31.765748       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1003 19:22:31.804093       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1003 19:22:36.335656       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1003 19:22:36.374269       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1003 19:22:36.494018       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1003 19:22:36.778534       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1003 19:23:23.291597       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	W1003 19:23:23.330698       1 logging.go:55] [core] [Channel #107 SubChannel #109]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1003 19:23:23.330865       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [5bc9d928c66f715d2cb955773ff9a4ceeac2d33a54d32a1544eac9d3e61700fe] <==
	I1003 19:23:38.702899       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1003 19:23:38.706630       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1003 19:23:38.708349       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1003 19:23:38.711597       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1003 19:23:38.712777       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1003 19:23:38.713942       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1003 19:23:38.715182       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1003 19:23:38.716425       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1003 19:23:38.718183       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1003 19:23:38.718502       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1003 19:23:38.720905       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1003 19:23:38.720926       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1003 19:23:38.724107       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1003 19:23:38.724202       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1003 19:23:38.736710       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1003 19:23:38.737143       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1003 19:23:38.737213       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1003 19:23:38.739760       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1003 19:23:38.742000       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1003 19:23:38.742070       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1003 19:23:38.742012       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1003 19:23:38.742035       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1003 19:23:38.742059       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1003 19:23:38.742047       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1003 19:23:38.743323       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	
	
	==> kube-controller-manager [fe077fc7b7398ab6a71e31a253a8c67d7227163b1d3d6d2ff769425cebd43420] <==
	I1003 19:22:35.473758       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1003 19:22:35.474831       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1003 19:22:35.474848       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1003 19:22:35.474860       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1003 19:22:35.474869       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1003 19:22:35.479465       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1003 19:22:35.474903       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1003 19:22:35.474894       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1003 19:22:35.483276       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1003 19:22:35.486013       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1003 19:22:35.488889       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1003 19:22:35.490117       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1003 19:22:35.490177       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1003 19:22:35.490227       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1003 19:22:35.490270       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1003 19:22:35.490316       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1003 19:22:35.498151       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1003 19:22:35.498576       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1003 19:22:35.518183       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1003 19:22:35.537140       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1003 19:22:35.573484       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1003 19:22:35.573506       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1003 19:22:35.573514       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1003 19:22:35.638041       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1003 19:23:20.479560       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [6168e29def1182e29c0bf294c1c3d7237309f9f85b32e17a56b611beab0de0f3] <==
	I1003 19:23:33.660179       1 server_linux.go:53] "Using iptables proxy"
	I1003 19:23:34.489557       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1003 19:23:35.414316       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes \"pause-844729\" is forbidden: User \"system:serviceaccount:kube-system:kube-proxy\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1003 19:23:36.400598       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1003 19:23:36.402741       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1003 19:23:36.402951       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1003 19:23:36.503938       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1003 19:23:36.504052       1 server_linux.go:132] "Using iptables Proxier"
	I1003 19:23:36.513154       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1003 19:23:36.514787       1 server.go:527] "Version info" version="v1.34.1"
	I1003 19:23:36.515064       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1003 19:23:36.520215       1 config.go:200] "Starting service config controller"
	I1003 19:23:36.530296       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1003 19:23:36.520520       1 config.go:106] "Starting endpoint slice config controller"
	I1003 19:23:36.533032       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1003 19:23:36.533129       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1003 19:23:36.522491       1 config.go:309] "Starting node config controller"
	I1003 19:23:36.533241       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1003 19:23:36.533270       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1003 19:23:36.520535       1 config.go:403] "Starting serviceCIDR config controller"
	I1003 19:23:36.533337       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1003 19:23:36.533365       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1003 19:23:36.631165       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [857ea2e27fd5446162221b5717f5c41724882e4d6d67b73122cbadfde6751525] <==
	I1003 19:22:38.077697       1 server_linux.go:53] "Using iptables proxy"
	I1003 19:22:38.186479       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1003 19:22:38.288540       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1003 19:22:38.288578       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1003 19:22:38.288666       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1003 19:22:38.311328       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1003 19:22:38.311377       1 server_linux.go:132] "Using iptables Proxier"
	I1003 19:22:38.315794       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1003 19:22:38.316085       1 server.go:527] "Version info" version="v1.34.1"
	I1003 19:22:38.316105       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1003 19:22:38.317820       1 config.go:200] "Starting service config controller"
	I1003 19:22:38.317889       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1003 19:22:38.317951       1 config.go:106] "Starting endpoint slice config controller"
	I1003 19:22:38.317978       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1003 19:22:38.318015       1 config.go:403] "Starting serviceCIDR config controller"
	I1003 19:22:38.318045       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1003 19:22:38.318949       1 config.go:309] "Starting node config controller"
	I1003 19:22:38.321017       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1003 19:22:38.321088       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1003 19:22:38.419037       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1003 19:22:38.419049       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1003 19:22:38.419086       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [6f18ec5c83f04389f6cce9ba80e373f135129e84c9590239ca46414eb849a154] <==
	E1003 19:22:28.651336       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1003 19:22:28.651412       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1003 19:22:28.651486       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1003 19:22:28.651533       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1003 19:22:28.651648       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1003 19:22:28.656084       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1003 19:22:28.656966       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1003 19:22:29.466339       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1003 19:22:29.557013       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1003 19:22:29.647468       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1003 19:22:29.657717       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1003 19:22:29.672597       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1003 19:22:29.703029       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1003 19:22:29.714234       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1003 19:22:29.794980       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1003 19:22:29.807989       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1003 19:22:29.810050       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1003 19:22:29.832475       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	I1003 19:22:32.516621       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1003 19:23:23.308485       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1003 19:23:23.308587       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1003 19:23:23.309647       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1003 19:23:23.310772       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1003 19:23:23.311379       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1003 19:23:23.311456       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [da0cffc30d07d485c02b0ec61d8a9b3909ac227213b2060ee5749f2e4c309f14] <==
	I1003 19:23:35.373580       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1003 19:23:35.376172       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1003 19:23:35.376680       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1003 19:23:35.376775       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1003 19:23:35.376824       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1003 19:23:35.383702       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1003 19:23:35.383867       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1003 19:23:35.389208       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1003 19:23:35.389319       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1003 19:23:35.389412       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1003 19:23:35.389824       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1003 19:23:35.389950       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1003 19:23:35.390029       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1003 19:23:35.390109       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1003 19:23:35.390189       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1003 19:23:35.390335       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1003 19:23:35.390471       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1003 19:23:35.398879       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1003 19:23:35.399037       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1003 19:23:35.405004       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1003 19:23:35.405232       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1003 19:23:35.405382       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1003 19:23:35.405520       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1003 19:23:35.405694       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	I1003 19:23:36.977899       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 03 19:23:31 pause-844729 kubelet[1312]: E1003 19:23:31.037386    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kindnet-qhksz\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="0596aa14-3857-4ba6-a81c-11b8c29baf94" pod="kube-system/kindnet-qhksz"
	Oct 03 19:23:31 pause-844729 kubelet[1312]: E1003 19:23:31.037667    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-z7pwb\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="427f1d63-2b09-401a-b2f3-2e2a8248c11e" pod="kube-system/coredns-66bc5c9577-z7pwb"
	Oct 03 19:23:31 pause-844729 kubelet[1312]: I1003 19:23:31.042418    1312 scope.go:117] "RemoveContainer" containerID="6f18ec5c83f04389f6cce9ba80e373f135129e84c9590239ca46414eb849a154"
	Oct 03 19:23:31 pause-844729 kubelet[1312]: E1003 19:23:31.043727    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-844729\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="c713f73feb229ea0aeb655c8766a710f" pod="kube-system/etcd-pause-844729"
	Oct 03 19:23:31 pause-844729 kubelet[1312]: E1003 19:23:31.044212    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-844729\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="f67d1dd0c885c384aa5abf187316f922" pod="kube-system/kube-scheduler-pause-844729"
	Oct 03 19:23:31 pause-844729 kubelet[1312]: E1003 19:23:31.044555    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-844729\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="70ebf733007858dadb749b90ec6fad45" pod="kube-system/kube-apiserver-pause-844729"
	Oct 03 19:23:31 pause-844729 kubelet[1312]: E1003 19:23:31.047766    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-844729\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="b7eccba1dcf39d64a52249a54fe30caa" pod="kube-system/kube-controller-manager-pause-844729"
	Oct 03 19:23:31 pause-844729 kubelet[1312]: E1003 19:23:31.048369    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vxnlc\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="b9fa1a51-79ed-470a-a56a-1d830b23760e" pod="kube-system/kube-proxy-vxnlc"
	Oct 03 19:23:31 pause-844729 kubelet[1312]: E1003 19:23:31.048707    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kindnet-qhksz\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="0596aa14-3857-4ba6-a81c-11b8c29baf94" pod="kube-system/kindnet-qhksz"
	Oct 03 19:23:31 pause-844729 kubelet[1312]: E1003 19:23:31.049055    1312 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-z7pwb\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="427f1d63-2b09-401a-b2f3-2e2a8248c11e" pod="kube-system/coredns-66bc5c9577-z7pwb"
	Oct 03 19:23:35 pause-844729 kubelet[1312]: E1003 19:23:35.289895    1312 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-844729\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-844729' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Oct 03 19:23:35 pause-844729 kubelet[1312]: E1003 19:23:35.290051    1312 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-qhksz\" is forbidden: User \"system:node:pause-844729\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-844729' and this object" podUID="0596aa14-3857-4ba6-a81c-11b8c29baf94" pod="kube-system/kindnet-qhksz"
	Oct 03 19:23:35 pause-844729 kubelet[1312]: E1003 19:23:35.299999    1312 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-z7pwb\" is forbidden: User \"system:node:pause-844729\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-844729' and this object" podUID="427f1d63-2b09-401a-b2f3-2e2a8248c11e" pod="kube-system/coredns-66bc5c9577-z7pwb"
	Oct 03 19:23:35 pause-844729 kubelet[1312]: E1003 19:23:35.325838    1312 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-844729\" is forbidden: User \"system:node:pause-844729\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-844729' and this object" podUID="c713f73feb229ea0aeb655c8766a710f" pod="kube-system/etcd-pause-844729"
	Oct 03 19:23:35 pause-844729 kubelet[1312]: E1003 19:23:35.344560    1312 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-844729\" is forbidden: User \"system:node:pause-844729\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-844729' and this object" podUID="f67d1dd0c885c384aa5abf187316f922" pod="kube-system/kube-scheduler-pause-844729"
	Oct 03 19:23:35 pause-844729 kubelet[1312]: E1003 19:23:35.350668    1312 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-844729\" is forbidden: User \"system:node:pause-844729\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-844729' and this object" podUID="70ebf733007858dadb749b90ec6fad45" pod="kube-system/kube-apiserver-pause-844729"
	Oct 03 19:23:35 pause-844729 kubelet[1312]: E1003 19:23:35.357089    1312 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-844729\" is forbidden: User \"system:node:pause-844729\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-844729' and this object" podUID="b7eccba1dcf39d64a52249a54fe30caa" pod="kube-system/kube-controller-manager-pause-844729"
	Oct 03 19:23:35 pause-844729 kubelet[1312]: E1003 19:23:35.359546    1312 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-vxnlc\" is forbidden: User \"system:node:pause-844729\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-844729' and this object" podUID="b9fa1a51-79ed-470a-a56a-1d830b23760e" pod="kube-system/kube-proxy-vxnlc"
	Oct 03 19:23:35 pause-844729 kubelet[1312]: E1003 19:23:35.361419    1312 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-qhksz\" is forbidden: User \"system:node:pause-844729\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-844729' and this object" podUID="0596aa14-3857-4ba6-a81c-11b8c29baf94" pod="kube-system/kindnet-qhksz"
	Oct 03 19:23:35 pause-844729 kubelet[1312]: E1003 19:23:35.366680    1312 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-z7pwb\" is forbidden: User \"system:node:pause-844729\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-844729' and this object" podUID="427f1d63-2b09-401a-b2f3-2e2a8248c11e" pod="kube-system/coredns-66bc5c9577-z7pwb"
	Oct 03 19:23:35 pause-844729 kubelet[1312]: E1003 19:23:35.372656    1312 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-844729\" is forbidden: User \"system:node:pause-844729\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-844729' and this object" podUID="c713f73feb229ea0aeb655c8766a710f" pod="kube-system/etcd-pause-844729"
	Oct 03 19:23:35 pause-844729 kubelet[1312]: E1003 19:23:35.382981    1312 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-844729\" is forbidden: User \"system:node:pause-844729\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-844729' and this object" podUID="f67d1dd0c885c384aa5abf187316f922" pod="kube-system/kube-scheduler-pause-844729"
	Oct 03 19:23:47 pause-844729 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 03 19:23:47 pause-844729 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 03 19:23:47 pause-844729 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-844729 -n pause-844729
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-844729 -n pause-844729: exit status 2 (340.002862ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-844729 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (6.05s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (4.37s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-174543 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-174543 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (765.880288ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-03T19:36:14Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-174543 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-174543 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-174543 describe deploy/metrics-server -n kube-system: exit status 1 (224.12262ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-174543 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-174543
helpers_test.go:243: (dbg) docker inspect old-k8s-version-174543:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e396cf711cf72d67a3eb0308bfe582b67073d4549b3bd8af7083d99767f74cff",
	        "Created": "2025-10-03T19:35:07.94543535Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 465280,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-03T19:35:08.013948222Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/e396cf711cf72d67a3eb0308bfe582b67073d4549b3bd8af7083d99767f74cff/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e396cf711cf72d67a3eb0308bfe582b67073d4549b3bd8af7083d99767f74cff/hostname",
	        "HostsPath": "/var/lib/docker/containers/e396cf711cf72d67a3eb0308bfe582b67073d4549b3bd8af7083d99767f74cff/hosts",
	        "LogPath": "/var/lib/docker/containers/e396cf711cf72d67a3eb0308bfe582b67073d4549b3bd8af7083d99767f74cff/e396cf711cf72d67a3eb0308bfe582b67073d4549b3bd8af7083d99767f74cff-json.log",
	        "Name": "/old-k8s-version-174543",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-174543:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-174543",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e396cf711cf72d67a3eb0308bfe582b67073d4549b3bd8af7083d99767f74cff",
	                "LowerDir": "/var/lib/docker/overlay2/48f8d5487aa8e63c3522dc4412a644c246929812a11cb3ecb803638938d2de80-init/diff:/var/lib/docker/overlay2/87b205803817b0b71a214d995ab7e10a92033bbf72d76d6e052f1d21ccecb313/diff",
	                "MergedDir": "/var/lib/docker/overlay2/48f8d5487aa8e63c3522dc4412a644c246929812a11cb3ecb803638938d2de80/merged",
	                "UpperDir": "/var/lib/docker/overlay2/48f8d5487aa8e63c3522dc4412a644c246929812a11cb3ecb803638938d2de80/diff",
	                "WorkDir": "/var/lib/docker/overlay2/48f8d5487aa8e63c3522dc4412a644c246929812a11cb3ecb803638938d2de80/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-174543",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-174543/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-174543",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-174543",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-174543",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c263ccd3912e66d2662bd51dffab0f9d9c0bb0ad9cad50a2e665c9fc0910b980",
	            "SandboxKey": "/var/run/docker/netns/c263ccd3912e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33418"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33419"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33422"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33420"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33421"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-174543": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "06:7b:4a:45:9c:d3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "002964c2ebf4675c3eed6a35959bca86f080d98023eaf2d830eb21475b5fd360",
	                    "EndpointID": "4482a67f7b95e74d123d7bca741caab6d5aabd3cd451a9af48dc02c1072dfda3",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-174543",
	                        "e396cf711cf7"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-174543 -n old-k8s-version-174543
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-174543 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-174543 logs -n 25: (1.894823554s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-388132 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ cilium-388132             │ jenkins │ v1.37.0 │ 03 Oct 25 19:25 UTC │                     │
	│ ssh     │ -p cilium-388132 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-388132             │ jenkins │ v1.37.0 │ 03 Oct 25 19:25 UTC │                     │
	│ ssh     │ -p cilium-388132 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-388132             │ jenkins │ v1.37.0 │ 03 Oct 25 19:25 UTC │                     │
	│ ssh     │ -p cilium-388132 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-388132             │ jenkins │ v1.37.0 │ 03 Oct 25 19:25 UTC │                     │
	│ ssh     │ -p cilium-388132 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-388132             │ jenkins │ v1.37.0 │ 03 Oct 25 19:25 UTC │                     │
	│ ssh     │ -p cilium-388132 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-388132             │ jenkins │ v1.37.0 │ 03 Oct 25 19:25 UTC │                     │
	│ ssh     │ -p cilium-388132 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-388132             │ jenkins │ v1.37.0 │ 03 Oct 25 19:25 UTC │                     │
	│ ssh     │ -p cilium-388132 sudo containerd config dump                                                                                                                                                                                                  │ cilium-388132             │ jenkins │ v1.37.0 │ 03 Oct 25 19:25 UTC │                     │
	│ ssh     │ -p cilium-388132 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-388132             │ jenkins │ v1.37.0 │ 03 Oct 25 19:25 UTC │                     │
	│ ssh     │ -p cilium-388132 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-388132             │ jenkins │ v1.37.0 │ 03 Oct 25 19:25 UTC │                     │
	│ ssh     │ -p cilium-388132 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-388132             │ jenkins │ v1.37.0 │ 03 Oct 25 19:25 UTC │                     │
	│ ssh     │ -p cilium-388132 sudo crio config                                                                                                                                                                                                             │ cilium-388132             │ jenkins │ v1.37.0 │ 03 Oct 25 19:25 UTC │                     │
	│ delete  │ -p cilium-388132                                                                                                                                                                                                                              │ cilium-388132             │ jenkins │ v1.37.0 │ 03 Oct 25 19:25 UTC │ 03 Oct 25 19:25 UTC │
	│ start   │ -p force-systemd-env-159095 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-159095  │ jenkins │ v1.37.0 │ 03 Oct 25 19:25 UTC │                     │
	│ ssh     │ force-systemd-flag-855981 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-855981 │ jenkins │ v1.37.0 │ 03 Oct 25 19:32 UTC │ 03 Oct 25 19:32 UTC │
	│ delete  │ -p force-systemd-flag-855981                                                                                                                                                                                                                  │ force-systemd-flag-855981 │ jenkins │ v1.37.0 │ 03 Oct 25 19:32 UTC │ 03 Oct 25 19:32 UTC │
	│ start   │ -p cert-expiration-324520 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-324520    │ jenkins │ v1.37.0 │ 03 Oct 25 19:32 UTC │ 03 Oct 25 19:33 UTC │
	│ delete  │ -p force-systemd-env-159095                                                                                                                                                                                                                   │ force-systemd-env-159095  │ jenkins │ v1.37.0 │ 03 Oct 25 19:34 UTC │ 03 Oct 25 19:34 UTC │
	│ start   │ -p cert-options-305866 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-305866       │ jenkins │ v1.37.0 │ 03 Oct 25 19:34 UTC │ 03 Oct 25 19:34 UTC │
	│ ssh     │ cert-options-305866 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-305866       │ jenkins │ v1.37.0 │ 03 Oct 25 19:34 UTC │ 03 Oct 25 19:34 UTC │
	│ ssh     │ -p cert-options-305866 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-305866       │ jenkins │ v1.37.0 │ 03 Oct 25 19:34 UTC │ 03 Oct 25 19:34 UTC │
	│ delete  │ -p cert-options-305866                                                                                                                                                                                                                        │ cert-options-305866       │ jenkins │ v1.37.0 │ 03 Oct 25 19:34 UTC │ 03 Oct 25 19:35 UTC │
	│ start   │ -p old-k8s-version-174543 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-174543    │ jenkins │ v1.37.0 │ 03 Oct 25 19:35 UTC │ 03 Oct 25 19:36 UTC │
	│ start   │ -p cert-expiration-324520 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-324520    │ jenkins │ v1.37.0 │ 03 Oct 25 19:36 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-174543 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-174543    │ jenkins │ v1.37.0 │ 03 Oct 25 19:36 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/03 19:36:03
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 19:36:03.537609  467587 out.go:360] Setting OutFile to fd 1 ...
	I1003 19:36:03.537720  467587 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 19:36:03.537724  467587 out.go:374] Setting ErrFile to fd 2...
	I1003 19:36:03.537727  467587 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 19:36:03.538004  467587 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-284583/.minikube/bin
	I1003 19:36:03.538369  467587 out.go:368] Setting JSON to false
	I1003 19:36:03.539379  467587 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8315,"bootTime":1759511849,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1003 19:36:03.539438  467587 start.go:140] virtualization:  
	I1003 19:36:03.543011  467587 out.go:179] * [cert-expiration-324520] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1003 19:36:03.546189  467587 notify.go:220] Checking for updates...
	I1003 19:36:03.546829  467587 out.go:179]   - MINIKUBE_LOCATION=21625
	I1003 19:36:03.549836  467587 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 19:36:03.552872  467587 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21625-284583/kubeconfig
	I1003 19:36:03.555710  467587 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-284583/.minikube
	I1003 19:36:03.558690  467587 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1003 19:36:03.563016  467587 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 19:36:03.566520  467587 config.go:182] Loaded profile config "cert-expiration-324520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 19:36:03.567076  467587 driver.go:421] Setting default libvirt URI to qemu:///system
	I1003 19:36:03.606732  467587 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1003 19:36:03.606903  467587 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 19:36:03.667815  467587 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-03 19:36:03.658210868 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
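
	The `docker system info --format "{{json .}}"` probe logged above is how minikube inspects the Docker driver before reusing the existing profile: it decodes the JSON and reads fields such as NCPU, MemTotal and the cgroup driver (cli_runner.go / info.go in the log). A minimal sketch of that probe follows; the dockerInfo struct is an assumed subset of the fields visible in the log line, not minikube's actual types.

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// dockerInfo mirrors a small, assumed subset of the fields shown in the
	// log line above; the real parser decodes many more.
	type dockerInfo struct {
		NCPU            int    `json:"NCPU"`
		MemTotal        int64  `json:"MemTotal"`
		ServerVersion   string `json:"ServerVersion"`
		OperatingSystem string `json:"OperatingSystem"`
		CgroupDriver    string `json:"CgroupDriver"`
	}

	func main() {
		// Equivalent of the logged command: docker system info --format "{{json .}}"
		out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
		if err != nil {
			panic(err)
		}
		var info dockerInfo
		if err := json.Unmarshal(out, &info); err != nil {
			panic(err)
		}
		fmt.Printf("docker %s on %s: %d CPUs, %d bytes RAM, cgroup driver %q\n",
			info.ServerVersion, info.OperatingSystem, info.NCPU, info.MemTotal, info.CgroupDriver)
	}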
	I1003 19:36:03.667916  467587 docker.go:318] overlay module found
	I1003 19:36:03.671015  467587 out.go:179] * Using the docker driver based on existing profile
	I1003 19:36:03.673849  467587 start.go:304] selected driver: docker
	I1003 19:36:03.673857  467587 start.go:924] validating driver "docker" against &{Name:cert-expiration-324520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-324520 Namespace:default APIServerHAVIP: A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: So
cketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 19:36:03.673951  467587 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 19:36:03.674681  467587 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 19:36:03.744618  467587 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-03 19:36:03.733997209 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1003 19:36:03.745131  467587 cni.go:84] Creating CNI manager for ""
	I1003 19:36:03.745190  467587 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 19:36:03.745229  467587 start.go:348] cluster config:
	{Name:cert-expiration-324520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-324520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.loca
l ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I1003 19:36:03.754264  467587 out.go:179] * Starting "cert-expiration-324520" primary control-plane node in "cert-expiration-324520" cluster
	I1003 19:36:03.759782  467587 cache.go:123] Beginning downloading kic base image for docker with crio
	I1003 19:36:03.764401  467587 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1003 19:36:03.769529  467587 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 19:36:03.769580  467587 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21625-284583/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1003 19:36:03.769588  467587 cache.go:58] Caching tarball of preloaded images
	I1003 19:36:03.769615  467587 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1003 19:36:03.769729  467587 preload.go:233] Found /home/jenkins/minikube-integration/21625-284583/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1003 19:36:03.769738  467587 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1003 19:36:03.769882  467587 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/cert-expiration-324520/config.json ...
	I1003 19:36:03.794685  467587 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1003 19:36:03.794696  467587 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1003 19:36:03.794709  467587 cache.go:232] Successfully downloaded all kic artifacts
	I1003 19:36:03.794730  467587 start.go:360] acquireMachinesLock for cert-expiration-324520: {Name:mk1f92fbf251ffec500cd5a1ccf89df97f79ff34 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 19:36:03.794781  467587 start.go:364] duration metric: took 35.923µs to acquireMachinesLock for "cert-expiration-324520"
	I1003 19:36:03.794798  467587 start.go:96] Skipping create...Using existing machine configuration
	I1003 19:36:03.794802  467587 fix.go:54] fixHost starting: 
	I1003 19:36:03.795146  467587 cli_runner.go:164] Run: docker container inspect cert-expiration-324520 --format={{.State.Status}}
	I1003 19:36:03.817643  467587 fix.go:112] recreateIfNeeded on cert-expiration-324520: state=Running err=<nil>
	W1003 19:36:03.817662  467587 fix.go:138] unexpected machine state, will restart: <nil>
	I1003 19:36:02.279485  464896 system_pods.go:86] 8 kube-system pods found
	I1003 19:36:02.279535  464896 system_pods.go:89] "coredns-5dd5756b68-6grkm" [678e0c98-f42a-4a69-8d50-a83a82886a69] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1003 19:36:02.279544  464896 system_pods.go:89] "etcd-old-k8s-version-174543" [8550f5a6-a2dc-4e9b-b623-9d0d9dfd66fd] Running
	I1003 19:36:02.279554  464896 system_pods.go:89] "kindnet-rwdd6" [3cc7fea5-9441-4250-80b2-05aff82ce727] Running
	I1003 19:36:02.279560  464896 system_pods.go:89] "kube-apiserver-old-k8s-version-174543" [b8ce8574-fafd-4466-b9b8-b12c3ae221b7] Running
	I1003 19:36:02.279565  464896 system_pods.go:89] "kube-controller-manager-old-k8s-version-174543" [aea29031-128c-4683-b165-ef6f11b79e72] Running
	I1003 19:36:02.279571  464896 system_pods.go:89] "kube-proxy-v4mqk" [50d549bb-e122-45af-8dad-b599f07053fd] Running
	I1003 19:36:02.279576  464896 system_pods.go:89] "kube-scheduler-old-k8s-version-174543" [3b73907b-8446-4189-9d96-e02a6c332aa6] Running
	I1003 19:36:02.279586  464896 system_pods.go:89] "storage-provisioner" [8db23fd8-6872-4901-b61f-a88ac26407a7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1003 19:36:02.279607  464896 retry.go:31] will retry after 311.126077ms: missing components: kube-dns
	I1003 19:36:02.595614  464896 system_pods.go:86] 8 kube-system pods found
	I1003 19:36:02.595650  464896 system_pods.go:89] "coredns-5dd5756b68-6grkm" [678e0c98-f42a-4a69-8d50-a83a82886a69] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1003 19:36:02.595657  464896 system_pods.go:89] "etcd-old-k8s-version-174543" [8550f5a6-a2dc-4e9b-b623-9d0d9dfd66fd] Running
	I1003 19:36:02.595664  464896 system_pods.go:89] "kindnet-rwdd6" [3cc7fea5-9441-4250-80b2-05aff82ce727] Running
	I1003 19:36:02.595669  464896 system_pods.go:89] "kube-apiserver-old-k8s-version-174543" [b8ce8574-fafd-4466-b9b8-b12c3ae221b7] Running
	I1003 19:36:02.595674  464896 system_pods.go:89] "kube-controller-manager-old-k8s-version-174543" [aea29031-128c-4683-b165-ef6f11b79e72] Running
	I1003 19:36:02.595678  464896 system_pods.go:89] "kube-proxy-v4mqk" [50d549bb-e122-45af-8dad-b599f07053fd] Running
	I1003 19:36:02.595682  464896 system_pods.go:89] "kube-scheduler-old-k8s-version-174543" [3b73907b-8446-4189-9d96-e02a6c332aa6] Running
	I1003 19:36:02.595688  464896 system_pods.go:89] "storage-provisioner" [8db23fd8-6872-4901-b61f-a88ac26407a7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1003 19:36:02.595703  464896 retry.go:31] will retry after 338.690095ms: missing components: kube-dns
	I1003 19:36:02.939206  464896 system_pods.go:86] 8 kube-system pods found
	I1003 19:36:02.939237  464896 system_pods.go:89] "coredns-5dd5756b68-6grkm" [678e0c98-f42a-4a69-8d50-a83a82886a69] Running
	I1003 19:36:02.939244  464896 system_pods.go:89] "etcd-old-k8s-version-174543" [8550f5a6-a2dc-4e9b-b623-9d0d9dfd66fd] Running
	I1003 19:36:02.939249  464896 system_pods.go:89] "kindnet-rwdd6" [3cc7fea5-9441-4250-80b2-05aff82ce727] Running
	I1003 19:36:02.939253  464896 system_pods.go:89] "kube-apiserver-old-k8s-version-174543" [b8ce8574-fafd-4466-b9b8-b12c3ae221b7] Running
	I1003 19:36:02.939257  464896 system_pods.go:89] "kube-controller-manager-old-k8s-version-174543" [aea29031-128c-4683-b165-ef6f11b79e72] Running
	I1003 19:36:02.939261  464896 system_pods.go:89] "kube-proxy-v4mqk" [50d549bb-e122-45af-8dad-b599f07053fd] Running
	I1003 19:36:02.939265  464896 system_pods.go:89] "kube-scheduler-old-k8s-version-174543" [3b73907b-8446-4189-9d96-e02a6c332aa6] Running
	I1003 19:36:02.939269  464896 system_pods.go:89] "storage-provisioner" [8db23fd8-6872-4901-b61f-a88ac26407a7] Running
	I1003 19:36:02.939277  464896 system_pods.go:126] duration metric: took 983.825778ms to wait for k8s-apps to be running ...
	I1003 19:36:02.939284  464896 system_svc.go:44] waiting for kubelet service to be running ....
	I1003 19:36:02.939342  464896 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 19:36:02.952913  464896 system_svc.go:56] duration metric: took 13.619997ms WaitForService to wait for kubelet
	I1003 19:36:02.952944  464896 kubeadm.go:586] duration metric: took 15.227990667s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 19:36:02.952964  464896 node_conditions.go:102] verifying NodePressure condition ...
	I1003 19:36:02.955783  464896 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1003 19:36:02.955817  464896 node_conditions.go:123] node cpu capacity is 2
	I1003 19:36:02.955831  464896 node_conditions.go:105] duration metric: took 2.83173ms to run NodePressure ...
	I1003 19:36:02.955844  464896 start.go:241] waiting for startup goroutines ...
	I1003 19:36:02.955851  464896 start.go:246] waiting for cluster config update ...
	I1003 19:36:02.955861  464896 start.go:255] writing updated cluster config ...
	I1003 19:36:02.956175  464896 ssh_runner.go:195] Run: rm -f paused
	I1003 19:36:02.960121  464896 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1003 19:36:02.965161  464896 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-6grkm" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:36:02.971074  464896 pod_ready.go:94] pod "coredns-5dd5756b68-6grkm" is "Ready"
	I1003 19:36:02.971114  464896 pod_ready.go:86] duration metric: took 5.915146ms for pod "coredns-5dd5756b68-6grkm" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:36:02.981705  464896 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-174543" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:36:02.988085  464896 pod_ready.go:94] pod "etcd-old-k8s-version-174543" is "Ready"
	I1003 19:36:02.988119  464896 pod_ready.go:86] duration metric: took 6.345117ms for pod "etcd-old-k8s-version-174543" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:36:02.991659  464896 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-174543" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:36:02.996879  464896 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-174543" is "Ready"
	I1003 19:36:02.996909  464896 pod_ready.go:86] duration metric: took 5.218898ms for pod "kube-apiserver-old-k8s-version-174543" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:36:03.007216  464896 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-174543" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:36:03.365059  464896 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-174543" is "Ready"
	I1003 19:36:03.365086  464896 pod_ready.go:86] duration metric: took 357.789563ms for pod "kube-controller-manager-old-k8s-version-174543" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:36:03.565470  464896 pod_ready.go:83] waiting for pod "kube-proxy-v4mqk" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:36:03.964206  464896 pod_ready.go:94] pod "kube-proxy-v4mqk" is "Ready"
	I1003 19:36:03.964236  464896 pod_ready.go:86] duration metric: took 398.738006ms for pod "kube-proxy-v4mqk" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:36:04.165284  464896 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-174543" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:36:04.565326  464896 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-174543" is "Ready"
	I1003 19:36:04.565359  464896 pod_ready.go:86] duration metric: took 400.04663ms for pod "kube-scheduler-old-k8s-version-174543" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:36:04.565372  464896 pod_ready.go:40] duration metric: took 1.605208398s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1003 19:36:04.645338  464896 start.go:623] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1003 19:36:04.657482  464896 out.go:203] 
	W1003 19:36:04.661237  464896 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1003 19:36:04.664935  464896 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1003 19:36:04.669429  464896 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-174543" cluster and "default" namespace by default
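
	The "will retry after ...: missing components: kube-dns" lines earlier in this start-up are a poll-until-ready loop: list the kube-system pods, and if a required component is still Pending, sleep a short interval and check again until a timeout. The following is a simplified sketch of that pattern; the timings, jitter and helper name are illustrative, not minikube's actual retry package.

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitFor polls check() until it returns nil or the timeout elapses,
	// sleeping a short jittered interval between attempts.
	func waitFor(timeout time.Duration, check func() error) error {
		deadline := time.Now().Add(timeout)
		for {
			err := check()
			if err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out waiting: %w", err)
			}
			delay := 300*time.Millisecond + time.Duration(rand.Intn(100))*time.Millisecond
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
		}
	}

	func main() {
		attempt := 0
		// Stand-in for "list kube-system pods and require kube-dns to be Running".
		err := waitFor(10*time.Second, func() error {
			attempt++
			if attempt < 3 {
				return errors.New("missing components: kube-dns")
			}
			return nil
		})
		fmt.Println("done:", err)
	}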
	I1003 19:36:03.824056  467587 out.go:252] * Updating the running docker "cert-expiration-324520" container ...
	I1003 19:36:03.824083  467587 machine.go:93] provisionDockerMachine start ...
	I1003 19:36:03.824184  467587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-324520
	I1003 19:36:03.844239  467587 main.go:141] libmachine: Using SSH client type: native
	I1003 19:36:03.844640  467587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33408 <nil> <nil>}
	I1003 19:36:03.844648  467587 main.go:141] libmachine: About to run SSH command:
	hostname
	I1003 19:36:03.989148  467587 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-324520
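
	The libmachine "native" SSH client above dials the container's published SSH port (127.0.0.1:33408 here) with the profile's id_rsa key as user "docker" and runs one command per session: hostname, the /etc/hosts fix-up, the cri-o restart, and so on. Below is a rough sketch of that round trip using golang.org/x/crypto/ssh; the key path and host-key handling are assumptions for illustration, not what libmachine actually configures.

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh" // external module: golang.org/x/crypto/ssh
	)

	func main() {
		// Illustrative key path; the log uses the profile's machines/<name>/id_rsa.
		key, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/cert-expiration-324520/id_rsa"))
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test container only
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:33408", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()

		session, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer session.Close()

		out, err := session.CombinedOutput("hostname")
		if err != nil {
			panic(err)
		}
		fmt.Printf("SSH cmd output: %s", out)
	}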
	
	I1003 19:36:03.989176  467587 ubuntu.go:182] provisioning hostname "cert-expiration-324520"
	I1003 19:36:03.989254  467587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-324520
	I1003 19:36:04.019516  467587 main.go:141] libmachine: Using SSH client type: native
	I1003 19:36:04.019897  467587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33408 <nil> <nil>}
	I1003 19:36:04.019907  467587 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-expiration-324520 && echo "cert-expiration-324520" | sudo tee /etc/hostname
	I1003 19:36:04.169766  467587 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-324520
	
	I1003 19:36:04.169840  467587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-324520
	I1003 19:36:04.189546  467587 main.go:141] libmachine: Using SSH client type: native
	I1003 19:36:04.189844  467587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33408 <nil> <nil>}
	I1003 19:36:04.189858  467587 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-324520' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-324520/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-324520' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 19:36:04.329964  467587 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 19:36:04.329980  467587 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21625-284583/.minikube CaCertPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21625-284583/.minikube}
	I1003 19:36:04.330002  467587 ubuntu.go:190] setting up certificates
	I1003 19:36:04.330011  467587 provision.go:84] configureAuth start
	I1003 19:36:04.330072  467587 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-324520
	I1003 19:36:04.347952  467587 provision.go:143] copyHostCerts
	I1003 19:36:04.348017  467587 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-284583/.minikube/key.pem, removing ...
	I1003 19:36:04.348034  467587 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-284583/.minikube/key.pem
	I1003 19:36:04.348110  467587 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21625-284583/.minikube/key.pem (1675 bytes)
	I1003 19:36:04.348214  467587 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-284583/.minikube/ca.pem, removing ...
	I1003 19:36:04.348218  467587 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-284583/.minikube/ca.pem
	I1003 19:36:04.348244  467587 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21625-284583/.minikube/ca.pem (1082 bytes)
	I1003 19:36:04.348303  467587 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-284583/.minikube/cert.pem, removing ...
	I1003 19:36:04.348307  467587 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-284583/.minikube/cert.pem
	I1003 19:36:04.348330  467587 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21625-284583/.minikube/cert.pem (1123 bytes)
	I1003 19:36:04.348384  467587 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-324520 san=[127.0.0.1 192.168.76.2 cert-expiration-324520 localhost minikube]
	I1003 19:36:04.621595  467587 provision.go:177] copyRemoteCerts
	I1003 19:36:04.621668  467587 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 19:36:04.621735  467587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-324520
	I1003 19:36:04.646977  467587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/cert-expiration-324520/id_rsa Username:docker}
	I1003 19:36:04.773124  467587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1003 19:36:04.805728  467587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1003 19:36:04.833432  467587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 19:36:04.858496  467587 provision.go:87] duration metric: took 528.461848ms to configureAuth
	I1003 19:36:04.858512  467587 ubuntu.go:206] setting minikube options for container-runtime
	I1003 19:36:04.858695  467587 config.go:182] Loaded profile config "cert-expiration-324520": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 19:36:04.858796  467587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-324520
	I1003 19:36:04.886664  467587 main.go:141] libmachine: Using SSH client type: native
	I1003 19:36:04.886964  467587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33408 <nil> <nil>}
	I1003 19:36:04.886976  467587 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1003 19:36:10.285322  467587 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1003 19:36:10.285341  467587 machine.go:96] duration metric: took 6.461244885s to provisionDockerMachine
	I1003 19:36:10.285351  467587 start.go:293] postStartSetup for "cert-expiration-324520" (driver="docker")
	I1003 19:36:10.285361  467587 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 19:36:10.285448  467587 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 19:36:10.285529  467587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-324520
	I1003 19:36:10.305112  467587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/cert-expiration-324520/id_rsa Username:docker}
	I1003 19:36:10.400905  467587 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 19:36:10.405014  467587 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1003 19:36:10.405034  467587 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1003 19:36:10.405043  467587 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-284583/.minikube/addons for local assets ...
	I1003 19:36:10.405097  467587 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-284583/.minikube/files for local assets ...
	I1003 19:36:10.405179  467587 filesync.go:149] local asset: /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem -> 2864342.pem in /etc/ssl/certs
	I1003 19:36:10.405273  467587 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 19:36:10.412859  467587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem --> /etc/ssl/certs/2864342.pem (1708 bytes)
	I1003 19:36:10.430467  467587 start.go:296] duration metric: took 145.101806ms for postStartSetup
	I1003 19:36:10.430539  467587 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 19:36:10.430587  467587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-324520
	I1003 19:36:10.447982  467587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/cert-expiration-324520/id_rsa Username:docker}
	I1003 19:36:10.542466  467587 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1003 19:36:10.547865  467587 fix.go:56] duration metric: took 6.753055322s for fixHost
	I1003 19:36:10.547880  467587 start.go:83] releasing machines lock for "cert-expiration-324520", held for 6.753092057s
	I1003 19:36:10.547951  467587 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-324520
	I1003 19:36:10.565595  467587 ssh_runner.go:195] Run: cat /version.json
	I1003 19:36:10.565647  467587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-324520
	I1003 19:36:10.565659  467587 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 19:36:10.565724  467587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-324520
	I1003 19:36:10.584107  467587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/cert-expiration-324520/id_rsa Username:docker}
	I1003 19:36:10.594395  467587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/cert-expiration-324520/id_rsa Username:docker}
	I1003 19:36:10.676612  467587 ssh_runner.go:195] Run: systemctl --version
	I1003 19:36:10.771207  467587 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1003 19:36:10.819601  467587 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1003 19:36:10.826064  467587 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 19:36:10.826124  467587 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 19:36:10.834132  467587 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1003 19:36:10.834145  467587 start.go:495] detecting cgroup driver to use...
	I1003 19:36:10.834175  467587 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1003 19:36:10.834218  467587 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 19:36:10.850064  467587 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 19:36:10.863811  467587 docker.go:218] disabling cri-docker service (if available) ...
	I1003 19:36:10.863863  467587 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1003 19:36:10.880511  467587 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1003 19:36:10.894447  467587 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1003 19:36:11.031092  467587 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1003 19:36:11.178219  467587 docker.go:234] disabling docker service ...
	I1003 19:36:11.178276  467587 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1003 19:36:11.194519  467587 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1003 19:36:11.208821  467587 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1003 19:36:11.353209  467587 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1003 19:36:11.498083  467587 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1003 19:36:11.512181  467587 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 19:36:11.528496  467587 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1003 19:36:11.528570  467587 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:36:11.538790  467587 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1003 19:36:11.538848  467587 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:36:11.548687  467587 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:36:11.559366  467587 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:36:11.569924  467587 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 19:36:11.578913  467587 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:36:11.588338  467587 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:36:11.597008  467587 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:36:11.606493  467587 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 19:36:11.614270  467587 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 19:36:11.622226  467587 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 19:36:11.765744  467587 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1003 19:36:11.944053  467587 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1003 19:36:11.944111  467587 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1003 19:36:11.947973  467587 start.go:563] Will wait 60s for crictl version
	I1003 19:36:11.948025  467587 ssh_runner.go:195] Run: which crictl
	I1003 19:36:11.951534  467587 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1003 19:36:11.989530  467587 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1003 19:36:11.989658  467587 ssh_runner.go:195] Run: crio --version
	I1003 19:36:12.031141  467587 ssh_runner.go:195] Run: crio --version
	I1003 19:36:12.064102  467587 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1003 19:36:12.067193  467587 cli_runner.go:164] Run: docker network inspect cert-expiration-324520 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 19:36:12.085315  467587 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1003 19:36:12.090295  467587 kubeadm.go:883] updating cluster {Name:cert-expiration-324520 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-324520 Namespace:default APIServerHAVIP: APIServerName:mini
kubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetCli
entPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1003 19:36:12.090390  467587 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 19:36:12.090471  467587 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 19:36:12.134126  467587 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 19:36:12.134138  467587 crio.go:433] Images already preloaded, skipping extraction
	I1003 19:36:12.134198  467587 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 19:36:12.160438  467587 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 19:36:12.160449  467587 cache_images.go:85] Images are preloaded, skipping loading
	I1003 19:36:12.160456  467587 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1003 19:36:12.160560  467587 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=cert-expiration-324520 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-324520 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1003 19:36:12.160640  467587 ssh_runner.go:195] Run: crio config
	I1003 19:36:12.226302  467587 cni.go:84] Creating CNI manager for ""
	I1003 19:36:12.226314  467587 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 19:36:12.226331  467587 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1003 19:36:12.226353  467587 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-324520 NodeName:cert-expiration-324520 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 19:36:12.226525  467587 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "cert-expiration-324520"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
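
	The generated kubeadm config above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) which, per the following log lines, is copied to /var/tmp/minikube/kubeadm.yaml.new over SSH. As a small sanity-check sketch, the stream can be walked with a YAML decoder; gopkg.in/yaml.v3 is an assumed external dependency and the file name is illustrative.

	package main

	import (
		"bytes"
		"errors"
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		data, err := os.ReadFile("kubeadm.yaml") // illustrative; the log writes /var/tmp/minikube/kubeadm.yaml.new
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		dec := yaml.NewDecoder(bytes.NewReader(data))
		for {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err != nil {
				if errors.Is(err, io.EOF) {
					break
				}
				fmt.Fprintln(os.Stderr, err)
				os.Exit(1)
			}
			// Print apiVersion/kind for each document in the stream.
			fmt.Printf("%v %v\n", doc["apiVersion"], doc["kind"])
		}
	}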
	
	I1003 19:36:12.226591  467587 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1003 19:36:12.235706  467587 binaries.go:44] Found k8s binaries, skipping transfer
	I1003 19:36:12.235776  467587 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1003 19:36:12.243999  467587 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1003 19:36:12.258644  467587 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 19:36:12.272014  467587 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
	I1003 19:36:12.285041  467587 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1003 19:36:12.289206  467587 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 19:36:12.436808  467587 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 19:36:12.452143  467587 certs.go:69] Setting up /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/cert-expiration-324520 for IP: 192.168.76.2
	I1003 19:36:12.452154  467587 certs.go:195] generating shared ca certs ...
	I1003 19:36:12.452168  467587 certs.go:227] acquiring lock for ca certs: {Name:mk5a10e6c921326e9c211447576eaeb893259ba7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:36:12.452331  467587 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21625-284583/.minikube/ca.key
	I1003 19:36:12.452397  467587 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.key
	I1003 19:36:12.452403  467587 certs.go:257] generating profile certs ...
	W1003 19:36:12.452535  467587 out.go:285] ! Certificate client.crt has expired. Generating a new one...
	I1003 19:36:12.452714  467587 certs.go:624] cert expired /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/cert-expiration-324520/client.crt: expiration: 2025-10-03 19:35:44 +0000 UTC, now: 2025-10-03 19:36:12.45270811 +0000 UTC m=+8.958896301
	I1003 19:36:12.452852  467587 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/cert-expiration-324520/client.key
	I1003 19:36:12.452879  467587 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/cert-expiration-324520/client.crt with IP's: []
	I1003 19:36:12.815910  467587 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/cert-expiration-324520/client.crt ...
	I1003 19:36:12.815927  467587 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/cert-expiration-324520/client.crt: {Name:mk2b9b4a6c3ea836978cddbd883877a629c23ee1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:36:12.816070  467587 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/cert-expiration-324520/client.key ...
	I1003 19:36:12.816077  467587 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/cert-expiration-324520/client.key: {Name:mkc9a91b3db18452b751f96097ad203733867b5c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W1003 19:36:12.816238  467587 out.go:285] ! Certificate apiserver.crt.8ab1f55d has expired. Generating a new one...
	I1003 19:36:12.816305  467587 certs.go:624] cert expired /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/cert-expiration-324520/apiserver.crt.8ab1f55d: expiration: 2025-10-03 19:35:45 +0000 UTC, now: 2025-10-03 19:36:12.816298559 +0000 UTC m=+9.322486660
	I1003 19:36:12.816388  467587 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/cert-expiration-324520/apiserver.key.8ab1f55d
	I1003 19:36:12.816402  467587 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/cert-expiration-324520/apiserver.crt.8ab1f55d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
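
	The two warnings above come from minikube noticing that the profile's client and apiserver certificates, issued with a three-minute lifetime on the previous start (CertExpiration:3m0s in the old cluster config), are already past their NotAfter date, so it regenerates them before restarting the control plane. A minimal sketch of such an expiry check on a PEM certificate follows; the path and function name are illustrative, not the actual certs.go helper.

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// certExpired reports whether the first PEM certificate in path is past
	// its NotAfter date, and returns that expiration time.
	func certExpired(path string) (bool, time.Time, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, time.Time{}, err
		}
		block, _ := pem.Decode(data)
		if block == nil || block.Type != "CERTIFICATE" {
			return false, time.Time{}, fmt.Errorf("no certificate PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, time.Time{}, err
		}
		return time.Now().After(cert.NotAfter), cert.NotAfter, nil
	}

	func main() {
		expired, notAfter, err := certExpired("client.crt") // illustrative path
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Printf("expired=%v, expiration: %s, now: %s\n", expired, notAfter, time.Now().UTC())
	}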
	
	
	==> CRI-O <==
	Oct 03 19:36:02 old-k8s-version-174543 crio[837]: time="2025-10-03T19:36:02.118446735Z" level=info msg="Created container 2e831abcd19098d5cd3c1d9c4f5129cd21ef8a1e29695ff33f143a1b858706e8: kube-system/coredns-5dd5756b68-6grkm/coredns" id=c442ff09-7356-4454-81bf-cbec838ead75 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:36:02 old-k8s-version-174543 crio[837]: time="2025-10-03T19:36:02.11986447Z" level=info msg="Starting container: 2e831abcd19098d5cd3c1d9c4f5129cd21ef8a1e29695ff33f143a1b858706e8" id=43e1f86a-fa0a-4770-8cb1-9b0c717795df name=/runtime.v1.RuntimeService/StartContainer
	Oct 03 19:36:02 old-k8s-version-174543 crio[837]: time="2025-10-03T19:36:02.122050241Z" level=info msg="Started container" PID=1946 containerID=2e831abcd19098d5cd3c1d9c4f5129cd21ef8a1e29695ff33f143a1b858706e8 description=kube-system/coredns-5dd5756b68-6grkm/coredns id=43e1f86a-fa0a-4770-8cb1-9b0c717795df name=/runtime.v1.RuntimeService/StartContainer sandboxID=ccc29d343c9629ee7cb07baf43630d4839b6b787a6247e5f4a5d6b8069d9dbeb
	Oct 03 19:36:05 old-k8s-version-174543 crio[837]: time="2025-10-03T19:36:05.239349751Z" level=info msg="Running pod sandbox: default/busybox/POD" id=55c33ede-bd48-4c95-b191-4b15e157eae4 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 03 19:36:05 old-k8s-version-174543 crio[837]: time="2025-10-03T19:36:05.239434478Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 19:36:05 old-k8s-version-174543 crio[837]: time="2025-10-03T19:36:05.244600617Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:05a1d399d09e8c55d7d181488da5c48e068e0ace6e0d41c567f10c59e1f3f92f UID:59ac2e32-e58b-4476-9428-b0694f51e499 NetNS:/var/run/netns/ac974ed7-b8e7-4111-831a-ae0a1e762c63 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40027ee9a0}] Aliases:map[]}"
	Oct 03 19:36:05 old-k8s-version-174543 crio[837]: time="2025-10-03T19:36:05.244812132Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 03 19:36:05 old-k8s-version-174543 crio[837]: time="2025-10-03T19:36:05.258198132Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:05a1d399d09e8c55d7d181488da5c48e068e0ace6e0d41c567f10c59e1f3f92f UID:59ac2e32-e58b-4476-9428-b0694f51e499 NetNS:/var/run/netns/ac974ed7-b8e7-4111-831a-ae0a1e762c63 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40027ee9a0}] Aliases:map[]}"
	Oct 03 19:36:05 old-k8s-version-174543 crio[837]: time="2025-10-03T19:36:05.258515724Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 03 19:36:05 old-k8s-version-174543 crio[837]: time="2025-10-03T19:36:05.262990439Z" level=info msg="Ran pod sandbox 05a1d399d09e8c55d7d181488da5c48e068e0ace6e0d41c567f10c59e1f3f92f with infra container: default/busybox/POD" id=55c33ede-bd48-4c95-b191-4b15e157eae4 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 03 19:36:05 old-k8s-version-174543 crio[837]: time="2025-10-03T19:36:05.264299997Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c4192aea-7b12-4c2a-82b6-faaf097cfd00 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 19:36:05 old-k8s-version-174543 crio[837]: time="2025-10-03T19:36:05.264524066Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=c4192aea-7b12-4c2a-82b6-faaf097cfd00 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 19:36:05 old-k8s-version-174543 crio[837]: time="2025-10-03T19:36:05.264639653Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=c4192aea-7b12-4c2a-82b6-faaf097cfd00 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 19:36:05 old-k8s-version-174543 crio[837]: time="2025-10-03T19:36:05.265412317Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=379b0659-427b-41cf-b89b-8ea87d37d970 name=/runtime.v1.ImageService/PullImage
	Oct 03 19:36:05 old-k8s-version-174543 crio[837]: time="2025-10-03T19:36:05.267834481Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 03 19:36:07 old-k8s-version-174543 crio[837]: time="2025-10-03T19:36:07.276421245Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=379b0659-427b-41cf-b89b-8ea87d37d970 name=/runtime.v1.ImageService/PullImage
	Oct 03 19:36:07 old-k8s-version-174543 crio[837]: time="2025-10-03T19:36:07.27765684Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=2fd9229c-a975-4da7-b012-f1c5cd50e4dd name=/runtime.v1.ImageService/ImageStatus
	Oct 03 19:36:07 old-k8s-version-174543 crio[837]: time="2025-10-03T19:36:07.279310271Z" level=info msg="Creating container: default/busybox/busybox" id=395a22e6-8150-469d-a6c2-6c15163cb4f3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:36:07 old-k8s-version-174543 crio[837]: time="2025-10-03T19:36:07.280444401Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 19:36:07 old-k8s-version-174543 crio[837]: time="2025-10-03T19:36:07.285253619Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 19:36:07 old-k8s-version-174543 crio[837]: time="2025-10-03T19:36:07.285876611Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 19:36:07 old-k8s-version-174543 crio[837]: time="2025-10-03T19:36:07.310539961Z" level=info msg="Created container 3943628f444a202a7f137e1dcc3a57c19bb6f94ba4b2fd1bc9d7eb408c16eb3c: default/busybox/busybox" id=395a22e6-8150-469d-a6c2-6c15163cb4f3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:36:07 old-k8s-version-174543 crio[837]: time="2025-10-03T19:36:07.311651025Z" level=info msg="Starting container: 3943628f444a202a7f137e1dcc3a57c19bb6f94ba4b2fd1bc9d7eb408c16eb3c" id=1645d8ec-a0c2-45e4-910d-38b06e94c063 name=/runtime.v1.RuntimeService/StartContainer
	Oct 03 19:36:07 old-k8s-version-174543 crio[837]: time="2025-10-03T19:36:07.313635727Z" level=info msg="Started container" PID=2004 containerID=3943628f444a202a7f137e1dcc3a57c19bb6f94ba4b2fd1bc9d7eb408c16eb3c description=default/busybox/busybox id=1645d8ec-a0c2-45e4-910d-38b06e94c063 name=/runtime.v1.RuntimeService/StartContainer sandboxID=05a1d399d09e8c55d7d181488da5c48e068e0ace6e0d41c567f10c59e1f3f92f
	Oct 03 19:36:13 old-k8s-version-174543 crio[837]: time="2025-10-03T19:36:13.315862084Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	3943628f444a2       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago       Running             busybox                   0                   05a1d399d09e8       busybox                                          default
	2e831abcd1909       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      14 seconds ago      Running             coredns                   0                   ccc29d343c962       coredns-5dd5756b68-6grkm                         kube-system
	0dc66974bbd38       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      14 seconds ago      Running             storage-provisioner       0                   cea7d58e11f05       storage-provisioner                              kube-system
	92a0ae89e51d3       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    25 seconds ago      Running             kindnet-cni               0                   0a117685a3abd       kindnet-rwdd6                                    kube-system
	242fef05b1cb4       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                      28 seconds ago      Running             kube-proxy                0                   d524aa2433d0b       kube-proxy-v4mqk                                 kube-system
	b0a7cd8590ace       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                      48 seconds ago      Running             kube-scheduler            0                   8394215eff160       kube-scheduler-old-k8s-version-174543            kube-system
	de78dfbf7ca36       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                      48 seconds ago      Running             kube-apiserver            0                   62972a14fd92a       kube-apiserver-old-k8s-version-174543            kube-system
	11b66d49f3053       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      48 seconds ago      Running             etcd                      0                   c31d53d5c1f3c       etcd-old-k8s-version-174543                      kube-system
	58882c1e7f222       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                      48 seconds ago      Running             kube-controller-manager   0                   5a7b03c43168b       kube-controller-manager-old-k8s-version-174543   kube-system
	
	
	==> coredns [2e831abcd19098d5cd3c1d9c4f5129cd21ef8a1e29695ff33f143a1b858706e8] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:45560 - 24174 "HINFO IN 50298662815143732.3452009115383417497. udp 55 false 512" NXDOMAIN qr,rd,ra 55 0.012435619s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-174543
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-174543
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a43873c79fc22f8b1ccd29d3dfa635d392b09335
	                    minikube.k8s.io/name=old-k8s-version-174543
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_03T19_35_35_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 03 Oct 2025 19:35:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-174543
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 03 Oct 2025 19:36:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 03 Oct 2025 19:36:05 +0000   Fri, 03 Oct 2025 19:35:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 03 Oct 2025 19:36:05 +0000   Fri, 03 Oct 2025 19:35:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 03 Oct 2025 19:36:05 +0000   Fri, 03 Oct 2025 19:35:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 03 Oct 2025 19:36:05 +0000   Fri, 03 Oct 2025 19:36:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-174543
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 460c4fa7590a44e2bf11821b2427dec7
	  System UUID:                d17a7f15-898a-43d2-a8ef-eaca6b0b9649
	  Boot ID:                    3762136e-8bec-4104-a5cb-0b1976f6048e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-5dd5756b68-6grkm                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     29s
	  kube-system                 etcd-old-k8s-version-174543                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         42s
	  kube-system                 kindnet-rwdd6                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-old-k8s-version-174543             250m (12%)    0 (0%)      0 (0%)           0 (0%)         44s
	  kube-system                 kube-controller-manager-old-k8s-version-174543    200m (10%)    0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 kube-proxy-v4mqk                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-old-k8s-version-174543             100m (5%)     0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 28s   kube-proxy       
	  Normal  Starting                 42s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  42s   kubelet          Node old-k8s-version-174543 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    42s   kubelet          Node old-k8s-version-174543 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     42s   kubelet          Node old-k8s-version-174543 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           30s   node-controller  Node old-k8s-version-174543 event: Registered Node old-k8s-version-174543 in Controller
	  Normal  NodeReady                15s   kubelet          Node old-k8s-version-174543 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct 3 19:05] overlayfs: idmapped layers are currently not supported
	[ +33.149550] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:07] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:08] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:09] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:10] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:11] overlayfs: idmapped layers are currently not supported
	[  +4.287643] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:12] overlayfs: idmapped layers are currently not supported
	[ +24.839009] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:13] overlayfs: idmapped layers are currently not supported
	[ +26.493253] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:15] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:16] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:17] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000010] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Oct 3 19:18] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:20] overlayfs: idmapped layers are currently not supported
	[ +32.018892] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:22] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:24] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:26] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:32] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:34] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:35] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [11b66d49f30531f76f335335707cee62f67e14a6d5c95fd6d43e21bc2ba77562] <==
	{"level":"info","ts":"2025-10-03T19:35:27.508985Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-10-03T19:35:27.509103Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-10-03T19:35:27.510908Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-03T19:35:27.511115Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-03T19:35:27.513071Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-03T19:35:27.513922Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-03T19:35:27.513999Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-03T19:35:27.672764Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2025-10-03T19:35:27.672892Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2025-10-03T19:35:27.672944Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2025-10-03T19:35:27.673018Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2025-10-03T19:35:27.673062Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-10-03T19:35:27.673097Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2025-10-03T19:35:27.673141Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-10-03T19:35:27.676879Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-03T19:35:27.67996Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-174543 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-03T19:35:27.68004Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-03T19:35:27.681327Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-03T19:35:27.682277Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-03T19:35:27.68971Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-10-03T19:35:27.69035Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-03T19:35:27.690483Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-03T19:35:27.690543Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-03T19:35:27.716716Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-03T19:35:27.716839Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 19:36:16 up  2:18,  0 user,  load average: 2.14, 1.16, 1.58
	Linux old-k8s-version-174543 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [92a0ae89e51d36114b94d09f22d8ea3dff3db53625ccf41d55c381715c3ea8a8] <==
	I1003 19:35:51.196347       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1003 19:35:51.196591       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1003 19:35:51.196717       1 main.go:148] setting mtu 1500 for CNI 
	I1003 19:35:51.196768       1 main.go:178] kindnetd IP family: "ipv4"
	I1003 19:35:51.196779       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-03T19:35:51Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1003 19:35:51.491149       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1003 19:35:51.493666       1 controller.go:381] "Waiting for informer caches to sync"
	I1003 19:35:51.493741       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1003 19:35:51.493883       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1003 19:35:51.694139       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1003 19:35:51.694230       1 metrics.go:72] Registering metrics
	I1003 19:35:51.694310       1 controller.go:711] "Syncing nftables rules"
	I1003 19:36:01.496844       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1003 19:36:01.496978       1 main.go:301] handling current node
	I1003 19:36:11.492868       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1003 19:36:11.492899       1 main.go:301] handling current node
	
	
	==> kube-apiserver [de78dfbf7ca36e3dfb6f194cef33b409af2cfb1177a9c484f1685377db56413b] <==
	I1003 19:35:31.216872       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1003 19:35:31.216879       1 cache.go:39] Caches are synced for autoregister controller
	I1003 19:35:31.238239       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1003 19:35:31.238276       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1003 19:35:31.238562       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1003 19:35:31.241635       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1003 19:35:31.244489       1 controller.go:624] quota admission added evaluator for: namespaces
	I1003 19:35:31.245934       1 shared_informer.go:318] Caches are synced for configmaps
	I1003 19:35:31.248558       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1003 19:35:31.289942       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1003 19:35:31.947887       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1003 19:35:31.952206       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1003 19:35:31.952293       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1003 19:35:32.618200       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1003 19:35:32.670661       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1003 19:35:32.785270       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1003 19:35:32.793479       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1003 19:35:32.794669       1 controller.go:624] quota admission added evaluator for: endpoints
	I1003 19:35:32.800027       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1003 19:35:33.162685       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1003 19:35:34.276547       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1003 19:35:34.292936       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1003 19:35:34.305236       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1003 19:35:46.758030       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1003 19:35:47.405865       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [58882c1e7f22267d6a649cb2292577e9aaa7fa18457cc79ccea9e08ee16c4232] <==
	I1003 19:35:46.720252       1 event.go:307] "Event occurred" object="kube-system/etcd-old-k8s-version-174543" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1003 19:35:46.720362       1 event.go:307] "Event occurred" object="kube-system/kube-scheduler-old-k8s-version-174543" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1003 19:35:46.724717       1 event.go:307] "Event occurred" object="kube-system/kube-controller-manager-old-k8s-version-174543" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1003 19:35:46.728046       1 event.go:307] "Event occurred" object="kube-system/kube-apiserver-old-k8s-version-174543" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1003 19:35:46.763567       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1003 19:35:47.098666       1 shared_informer.go:318] Caches are synced for garbage collector
	I1003 19:35:47.098697       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1003 19:35:47.110729       1 shared_informer.go:318] Caches are synced for garbage collector
	I1003 19:35:47.418276       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-v4mqk"
	I1003 19:35:47.423227       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-rwdd6"
	I1003 19:35:47.563445       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-mlkkf"
	I1003 19:35:47.591290       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-6grkm"
	I1003 19:35:47.604868       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="841.029834ms"
	I1003 19:35:47.645748       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="40.822187ms"
	I1003 19:35:47.648851       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="85.983µs"
	I1003 19:35:49.510903       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1003 19:35:49.561834       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-mlkkf"
	I1003 19:35:49.634174       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="124.075684ms"
	I1003 19:35:49.665007       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="30.779334ms"
	I1003 19:35:49.665123       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="79.771µs"
	I1003 19:36:01.732259       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="112.395µs"
	I1003 19:36:01.753526       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="72.559µs"
	I1003 19:36:02.688692       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="14.93936ms"
	I1003 19:36:02.688933       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="67.833µs"
	I1003 19:36:06.701434       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [242fef05b1cb4e51fd606af8898527d2fd5af05144d5fc8a647978eca4172e3c] <==
	I1003 19:35:48.027867       1 server_others.go:69] "Using iptables proxy"
	I1003 19:35:48.061336       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1003 19:35:48.091334       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1003 19:35:48.093825       1 server_others.go:152] "Using iptables Proxier"
	I1003 19:35:48.093931       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1003 19:35:48.093969       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1003 19:35:48.094012       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1003 19:35:48.094396       1 server.go:846] "Version info" version="v1.28.0"
	I1003 19:35:48.094591       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1003 19:35:48.098629       1 config.go:188] "Starting service config controller"
	I1003 19:35:48.098659       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1003 19:35:48.098681       1 config.go:97] "Starting endpoint slice config controller"
	I1003 19:35:48.098685       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1003 19:35:48.099133       1 config.go:315] "Starting node config controller"
	I1003 19:35:48.099141       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1003 19:35:48.198884       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1003 19:35:48.198945       1 shared_informer.go:318] Caches are synced for service config
	I1003 19:35:48.199234       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [b0a7cd8590ace9a21efb08a746beeda6018b3f53b6317de185f348b49e6d1c3e] <==
	W1003 19:35:31.222182       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1003 19:35:31.222198       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1003 19:35:31.222256       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1003 19:35:31.222272       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1003 19:35:31.222328       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1003 19:35:31.222344       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1003 19:35:31.222404       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1003 19:35:31.222418       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1003 19:35:32.035263       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1003 19:35:32.035395       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1003 19:35:32.176339       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1003 19:35:32.176460       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1003 19:35:32.183807       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1003 19:35:32.183914       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1003 19:35:32.210815       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1003 19:35:32.210850       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1003 19:35:32.220353       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1003 19:35:32.220463       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1003 19:35:32.298362       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1003 19:35:32.298465       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1003 19:35:32.360428       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1003 19:35:32.360469       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1003 19:35:32.392907       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1003 19:35:32.393050       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1003 19:35:34.305601       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 03 19:35:47 old-k8s-version-174543 kubelet[1381]: I1003 19:35:47.540928    1381 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/50d549bb-e122-45af-8dad-b599f07053fd-kube-proxy\") pod \"kube-proxy-v4mqk\" (UID: \"50d549bb-e122-45af-8dad-b599f07053fd\") " pod="kube-system/kube-proxy-v4mqk"
	Oct 03 19:35:47 old-k8s-version-174543 kubelet[1381]: I1003 19:35:47.540988    1381 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/3cc7fea5-9441-4250-80b2-05aff82ce727-cni-cfg\") pod \"kindnet-rwdd6\" (UID: \"3cc7fea5-9441-4250-80b2-05aff82ce727\") " pod="kube-system/kindnet-rwdd6"
	Oct 03 19:35:47 old-k8s-version-174543 kubelet[1381]: I1003 19:35:47.541012    1381 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3cc7fea5-9441-4250-80b2-05aff82ce727-xtables-lock\") pod \"kindnet-rwdd6\" (UID: \"3cc7fea5-9441-4250-80b2-05aff82ce727\") " pod="kube-system/kindnet-rwdd6"
	Oct 03 19:35:47 old-k8s-version-174543 kubelet[1381]: I1003 19:35:47.541049    1381 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4t7g\" (UniqueName: \"kubernetes.io/projected/50d549bb-e122-45af-8dad-b599f07053fd-kube-api-access-z4t7g\") pod \"kube-proxy-v4mqk\" (UID: \"50d549bb-e122-45af-8dad-b599f07053fd\") " pod="kube-system/kube-proxy-v4mqk"
	Oct 03 19:35:47 old-k8s-version-174543 kubelet[1381]: I1003 19:35:47.541081    1381 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3cc7fea5-9441-4250-80b2-05aff82ce727-lib-modules\") pod \"kindnet-rwdd6\" (UID: \"3cc7fea5-9441-4250-80b2-05aff82ce727\") " pod="kube-system/kindnet-rwdd6"
	Oct 03 19:35:47 old-k8s-version-174543 kubelet[1381]: I1003 19:35:47.541139    1381 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zl6sk\" (UniqueName: \"kubernetes.io/projected/3cc7fea5-9441-4250-80b2-05aff82ce727-kube-api-access-zl6sk\") pod \"kindnet-rwdd6\" (UID: \"3cc7fea5-9441-4250-80b2-05aff82ce727\") " pod="kube-system/kindnet-rwdd6"
	Oct 03 19:35:47 old-k8s-version-174543 kubelet[1381]: I1003 19:35:47.541166    1381 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/50d549bb-e122-45af-8dad-b599f07053fd-lib-modules\") pod \"kube-proxy-v4mqk\" (UID: \"50d549bb-e122-45af-8dad-b599f07053fd\") " pod="kube-system/kube-proxy-v4mqk"
	Oct 03 19:35:47 old-k8s-version-174543 kubelet[1381]: I1003 19:35:47.541217    1381 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/50d549bb-e122-45af-8dad-b599f07053fd-xtables-lock\") pod \"kube-proxy-v4mqk\" (UID: \"50d549bb-e122-45af-8dad-b599f07053fd\") " pod="kube-system/kube-proxy-v4mqk"
	Oct 03 19:35:47 old-k8s-version-174543 kubelet[1381]: W1003 19:35:47.789779    1381 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/e396cf711cf72d67a3eb0308bfe582b67073d4549b3bd8af7083d99767f74cff/crio-0a117685a3abd8d306a0d12e511b6ead19f0e7f01d0c91bf3dde3a9a123e76b0 WatchSource:0}: Error finding container 0a117685a3abd8d306a0d12e511b6ead19f0e7f01d0c91bf3dde3a9a123e76b0: Status 404 returned error can't find the container with id 0a117685a3abd8d306a0d12e511b6ead19f0e7f01d0c91bf3dde3a9a123e76b0
	Oct 03 19:35:48 old-k8s-version-174543 kubelet[1381]: I1003 19:35:48.608425    1381 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-v4mqk" podStartSLOduration=1.608371032 podCreationTimestamp="2025-10-03 19:35:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-03 19:35:48.608128321 +0000 UTC m=+14.366537444" watchObservedRunningTime="2025-10-03 19:35:48.608371032 +0000 UTC m=+14.366780147"
	Oct 03 19:35:54 old-k8s-version-174543 kubelet[1381]: I1003 19:35:54.471795    1381 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-rwdd6" podStartSLOduration=4.199627646 podCreationTimestamp="2025-10-03 19:35:47 +0000 UTC" firstStartedPulling="2025-10-03 19:35:47.819356282 +0000 UTC m=+13.577765397" lastFinishedPulling="2025-10-03 19:35:51.091465135 +0000 UTC m=+16.849874258" observedRunningTime="2025-10-03 19:35:51.632662005 +0000 UTC m=+17.391071136" watchObservedRunningTime="2025-10-03 19:35:54.471736507 +0000 UTC m=+20.230145654"
	Oct 03 19:36:01 old-k8s-version-174543 kubelet[1381]: I1003 19:36:01.695355    1381 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Oct 03 19:36:01 old-k8s-version-174543 kubelet[1381]: I1003 19:36:01.729377    1381 topology_manager.go:215] "Topology Admit Handler" podUID="678e0c98-f42a-4a69-8d50-a83a82886a69" podNamespace="kube-system" podName="coredns-5dd5756b68-6grkm"
	Oct 03 19:36:01 old-k8s-version-174543 kubelet[1381]: I1003 19:36:01.735558    1381 topology_manager.go:215] "Topology Admit Handler" podUID="8db23fd8-6872-4901-b61f-a88ac26407a7" podNamespace="kube-system" podName="storage-provisioner"
	Oct 03 19:36:01 old-k8s-version-174543 kubelet[1381]: I1003 19:36:01.859062    1381 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9w8jh\" (UniqueName: \"kubernetes.io/projected/678e0c98-f42a-4a69-8d50-a83a82886a69-kube-api-access-9w8jh\") pod \"coredns-5dd5756b68-6grkm\" (UID: \"678e0c98-f42a-4a69-8d50-a83a82886a69\") " pod="kube-system/coredns-5dd5756b68-6grkm"
	Oct 03 19:36:01 old-k8s-version-174543 kubelet[1381]: I1003 19:36:01.859126    1381 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/678e0c98-f42a-4a69-8d50-a83a82886a69-config-volume\") pod \"coredns-5dd5756b68-6grkm\" (UID: \"678e0c98-f42a-4a69-8d50-a83a82886a69\") " pod="kube-system/coredns-5dd5756b68-6grkm"
	Oct 03 19:36:01 old-k8s-version-174543 kubelet[1381]: I1003 19:36:01.859158    1381 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/8db23fd8-6872-4901-b61f-a88ac26407a7-tmp\") pod \"storage-provisioner\" (UID: \"8db23fd8-6872-4901-b61f-a88ac26407a7\") " pod="kube-system/storage-provisioner"
	Oct 03 19:36:01 old-k8s-version-174543 kubelet[1381]: I1003 19:36:01.859183    1381 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qxcj\" (UniqueName: \"kubernetes.io/projected/8db23fd8-6872-4901-b61f-a88ac26407a7-kube-api-access-5qxcj\") pod \"storage-provisioner\" (UID: \"8db23fd8-6872-4901-b61f-a88ac26407a7\") " pod="kube-system/storage-provisioner"
	Oct 03 19:36:02 old-k8s-version-174543 kubelet[1381]: W1003 19:36:02.054701    1381 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/e396cf711cf72d67a3eb0308bfe582b67073d4549b3bd8af7083d99767f74cff/crio-cea7d58e11f05e9df47620fc7735ff6ae27bdfd4972ace7127f30ce6ab92f9e9 WatchSource:0}: Error finding container cea7d58e11f05e9df47620fc7735ff6ae27bdfd4972ace7127f30ce6ab92f9e9: Status 404 returned error can't find the container with id cea7d58e11f05e9df47620fc7735ff6ae27bdfd4972ace7127f30ce6ab92f9e9
	Oct 03 19:36:02 old-k8s-version-174543 kubelet[1381]: W1003 19:36:02.061431    1381 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/e396cf711cf72d67a3eb0308bfe582b67073d4549b3bd8af7083d99767f74cff/crio-ccc29d343c9629ee7cb07baf43630d4839b6b787a6247e5f4a5d6b8069d9dbeb WatchSource:0}: Error finding container ccc29d343c9629ee7cb07baf43630d4839b6b787a6247e5f4a5d6b8069d9dbeb: Status 404 returned error can't find the container with id ccc29d343c9629ee7cb07baf43630d4839b6b787a6247e5f4a5d6b8069d9dbeb
	Oct 03 19:36:02 old-k8s-version-174543 kubelet[1381]: I1003 19:36:02.665497    1381 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-6grkm" podStartSLOduration=15.665447399 podCreationTimestamp="2025-10-03 19:35:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-03 19:36:02.663707263 +0000 UTC m=+28.422116386" watchObservedRunningTime="2025-10-03 19:36:02.665447399 +0000 UTC m=+28.423856514"
	Oct 03 19:36:02 old-k8s-version-174543 kubelet[1381]: I1003 19:36:02.665628    1381 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.665596184 podCreationTimestamp="2025-10-03 19:35:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-03 19:36:02.650657727 +0000 UTC m=+28.409066842" watchObservedRunningTime="2025-10-03 19:36:02.665596184 +0000 UTC m=+28.424005308"
	Oct 03 19:36:04 old-k8s-version-174543 kubelet[1381]: I1003 19:36:04.936801    1381 topology_manager.go:215] "Topology Admit Handler" podUID="59ac2e32-e58b-4476-9428-b0694f51e499" podNamespace="default" podName="busybox"
	Oct 03 19:36:04 old-k8s-version-174543 kubelet[1381]: I1003 19:36:04.975317    1381 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jz2cm\" (UniqueName: \"kubernetes.io/projected/59ac2e32-e58b-4476-9428-b0694f51e499-kube-api-access-jz2cm\") pod \"busybox\" (UID: \"59ac2e32-e58b-4476-9428-b0694f51e499\") " pod="default/busybox"
	Oct 03 19:36:05 old-k8s-version-174543 kubelet[1381]: W1003 19:36:05.260281    1381 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/e396cf711cf72d67a3eb0308bfe582b67073d4549b3bd8af7083d99767f74cff/crio-05a1d399d09e8c55d7d181488da5c48e068e0ace6e0d41c567f10c59e1f3f92f WatchSource:0}: Error finding container 05a1d399d09e8c55d7d181488da5c48e068e0ace6e0d41c567f10c59e1f3f92f: Status 404 returned error can't find the container with id 05a1d399d09e8c55d7d181488da5c48e068e0ace6e0d41c567f10c59e1f3f92f
	
	
	==> storage-provisioner [0dc66974bbd3824ab6b2642eca80ab327a8b164267be3a3796967c884a166d89] <==
	I1003 19:36:02.118195       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1003 19:36:02.149188       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1003 19:36:02.149591       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1003 19:36:02.159174       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1003 19:36:02.159438       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-174543_54d1a492-852b-45b6-9741-ec6634e4491d!
	I1003 19:36:02.163684       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"dad5d048-770e-49bf-b234-9f07728495ef", APIVersion:"v1", ResourceVersion:"407", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-174543_54d1a492-852b-45b6-9741-ec6634e4491d became leader
	I1003 19:36:02.262997       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-174543_54d1a492-852b-45b6-9741-ec6634e4491d!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-174543 -n old-k8s-version-174543
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-174543 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (4.37s)
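For reference, the post-mortem above can be re-gathered by hand against the same profile. The sketch below simply re-issues the diagnostics the harness already ran (the profile name old-k8s-version-174543 and the binary path out/minikube-linux-arm64 are taken from the log lines above; flag spellings follow standard minikube/kubectl/crictl usage and should be read as an approximation rather than the exact harness invocation):

	# cluster / apiserver state for the profile (cf. the minikube status call above)
	out/minikube-linux-arm64 status -p old-k8s-version-174543

	# pods not in Running phase, all namespaces (cf. the kubectl call from helpers_test.go above)
	kubectl --context old-k8s-version-174543 get po -A --field-selector=status.phase!=Running

	# node conditions and allocated resources (the "describe nodes" section above)
	kubectl --context old-k8s-version-174543 describe node old-k8s-version-174543

	# CRI-O's view of all containers on the node (the "container status" section above)
	out/minikube-linux-arm64 ssh -p old-k8s-version-174543 -- sudo crictl ps -a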

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (8.17s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-174543 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p old-k8s-version-174543 --alsologtostderr -v=1: exit status 80 (2.4491385s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-174543 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 19:37:41.941167  475193 out.go:360] Setting OutFile to fd 1 ...
	I1003 19:37:41.941375  475193 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 19:37:41.941399  475193 out.go:374] Setting ErrFile to fd 2...
	I1003 19:37:41.941421  475193 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 19:37:41.941872  475193 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-284583/.minikube/bin
	I1003 19:37:41.942270  475193 out.go:368] Setting JSON to false
	I1003 19:37:41.942330  475193 mustload.go:65] Loading cluster: old-k8s-version-174543
	I1003 19:37:41.943161  475193 config.go:182] Loaded profile config "old-k8s-version-174543": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1003 19:37:41.943978  475193 cli_runner.go:164] Run: docker container inspect old-k8s-version-174543 --format={{.State.Status}}
	I1003 19:37:41.963056  475193 host.go:66] Checking if "old-k8s-version-174543" exists ...
	I1003 19:37:41.963378  475193 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 19:37:42.031564  475193 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-03 19:37:42.013797944 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1003 19:37:42.032895  475193 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-174543 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1003 19:37:42.038207  475193 out.go:179] * Pausing node old-k8s-version-174543 ... 
	I1003 19:37:42.041161  475193 host.go:66] Checking if "old-k8s-version-174543" exists ...
	I1003 19:37:42.041521  475193 ssh_runner.go:195] Run: systemctl --version
	I1003 19:37:42.041579  475193 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-174543
	I1003 19:37:42.059444  475193 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/old-k8s-version-174543/id_rsa Username:docker}
	I1003 19:37:42.161033  475193 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 19:37:42.177090  475193 pause.go:51] kubelet running: true
	I1003 19:37:42.177249  475193 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1003 19:37:42.439940  475193 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1003 19:37:42.440033  475193 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1003 19:37:42.519256  475193 cri.go:89] found id: "299e25627798dd200810afddc280b9b6853cae4ac0ac3aba81703a80b719f759"
	I1003 19:37:42.519276  475193 cri.go:89] found id: "edf79b93e4b38e2ee91c81e9e314756148e9674922f93889028ee8c7ecc4ef9d"
	I1003 19:37:42.519281  475193 cri.go:89] found id: "ed93641b7305ecc78cf05b71981a9b30e56f9dd16df2e6eb2b65f4cc3ef9c10b"
	I1003 19:37:42.519284  475193 cri.go:89] found id: "b0164ebd7fa623d22d654d8c31fba34f430360c496ed08d6a01ebbe6ad7fa8fd"
	I1003 19:37:42.519287  475193 cri.go:89] found id: "07e35fb642fb1060de6f5b6fe3a20dcbf4caddf1bf2630c89f54858a905f5d85"
	I1003 19:37:42.519291  475193 cri.go:89] found id: "9d777d7ca3f3aae2a67724d1a6f8ab7dbc9844b33527c107ab163508dd940d95"
	I1003 19:37:42.519294  475193 cri.go:89] found id: "fc8be4f0125f487dca2dc76dd1220ac22ffcd4a1e02920fcc8ee321799717ac2"
	I1003 19:37:42.519297  475193 cri.go:89] found id: "5178fc63373a85b7ab0aa3b1194bd3b13ba6e413c7f9fcf141e7a055caeea3d9"
	I1003 19:37:42.519300  475193 cri.go:89] found id: "62ef8d10feba1f56202dc665fa46660c227322fdddf49c3e984ffb9430f54164"
	I1003 19:37:42.519308  475193 cri.go:89] found id: "c2d2e81f1c95c24f945e4ca4a6f6e6308d203a2030802e620a0adb06b519a7d2"
	I1003 19:37:42.519311  475193 cri.go:89] found id: "d250f6446c88cc68c5a3d4d9876c5bdef89e65ab6fd74df4fbd79456c956c5d8"
	I1003 19:37:42.519315  475193 cri.go:89] found id: ""
	I1003 19:37:42.519370  475193 ssh_runner.go:195] Run: sudo runc list -f json
	I1003 19:37:42.538564  475193 retry.go:31] will retry after 332.74631ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-03T19:37:42Z" level=error msg="open /run/runc: no such file or directory"
	I1003 19:37:42.872166  475193 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 19:37:42.885244  475193 pause.go:51] kubelet running: false
	I1003 19:37:42.885313  475193 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1003 19:37:43.044797  475193 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1003 19:37:43.044884  475193 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1003 19:37:43.115116  475193 cri.go:89] found id: "299e25627798dd200810afddc280b9b6853cae4ac0ac3aba81703a80b719f759"
	I1003 19:37:43.115182  475193 cri.go:89] found id: "edf79b93e4b38e2ee91c81e9e314756148e9674922f93889028ee8c7ecc4ef9d"
	I1003 19:37:43.115194  475193 cri.go:89] found id: "ed93641b7305ecc78cf05b71981a9b30e56f9dd16df2e6eb2b65f4cc3ef9c10b"
	I1003 19:37:43.115199  475193 cri.go:89] found id: "b0164ebd7fa623d22d654d8c31fba34f430360c496ed08d6a01ebbe6ad7fa8fd"
	I1003 19:37:43.115202  475193 cri.go:89] found id: "07e35fb642fb1060de6f5b6fe3a20dcbf4caddf1bf2630c89f54858a905f5d85"
	I1003 19:37:43.115205  475193 cri.go:89] found id: "9d777d7ca3f3aae2a67724d1a6f8ab7dbc9844b33527c107ab163508dd940d95"
	I1003 19:37:43.115208  475193 cri.go:89] found id: "fc8be4f0125f487dca2dc76dd1220ac22ffcd4a1e02920fcc8ee321799717ac2"
	I1003 19:37:43.115212  475193 cri.go:89] found id: "5178fc63373a85b7ab0aa3b1194bd3b13ba6e413c7f9fcf141e7a055caeea3d9"
	I1003 19:37:43.115215  475193 cri.go:89] found id: "62ef8d10feba1f56202dc665fa46660c227322fdddf49c3e984ffb9430f54164"
	I1003 19:37:43.115221  475193 cri.go:89] found id: "c2d2e81f1c95c24f945e4ca4a6f6e6308d203a2030802e620a0adb06b519a7d2"
	I1003 19:37:43.115224  475193 cri.go:89] found id: "d250f6446c88cc68c5a3d4d9876c5bdef89e65ab6fd74df4fbd79456c956c5d8"
	I1003 19:37:43.115227  475193 cri.go:89] found id: ""
	I1003 19:37:43.115284  475193 ssh_runner.go:195] Run: sudo runc list -f json
	I1003 19:37:43.127241  475193 retry.go:31] will retry after 280.196652ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-03T19:37:43Z" level=error msg="open /run/runc: no such file or directory"
	I1003 19:37:43.407676  475193 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 19:37:43.421601  475193 pause.go:51] kubelet running: false
	I1003 19:37:43.421694  475193 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1003 19:37:43.590338  475193 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1003 19:37:43.590461  475193 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1003 19:37:43.664719  475193 cri.go:89] found id: "299e25627798dd200810afddc280b9b6853cae4ac0ac3aba81703a80b719f759"
	I1003 19:37:43.664778  475193 cri.go:89] found id: "edf79b93e4b38e2ee91c81e9e314756148e9674922f93889028ee8c7ecc4ef9d"
	I1003 19:37:43.664783  475193 cri.go:89] found id: "ed93641b7305ecc78cf05b71981a9b30e56f9dd16df2e6eb2b65f4cc3ef9c10b"
	I1003 19:37:43.664790  475193 cri.go:89] found id: "b0164ebd7fa623d22d654d8c31fba34f430360c496ed08d6a01ebbe6ad7fa8fd"
	I1003 19:37:43.664794  475193 cri.go:89] found id: "07e35fb642fb1060de6f5b6fe3a20dcbf4caddf1bf2630c89f54858a905f5d85"
	I1003 19:37:43.664798  475193 cri.go:89] found id: "9d777d7ca3f3aae2a67724d1a6f8ab7dbc9844b33527c107ab163508dd940d95"
	I1003 19:37:43.664818  475193 cri.go:89] found id: "fc8be4f0125f487dca2dc76dd1220ac22ffcd4a1e02920fcc8ee321799717ac2"
	I1003 19:37:43.664822  475193 cri.go:89] found id: "5178fc63373a85b7ab0aa3b1194bd3b13ba6e413c7f9fcf141e7a055caeea3d9"
	I1003 19:37:43.664826  475193 cri.go:89] found id: "62ef8d10feba1f56202dc665fa46660c227322fdddf49c3e984ffb9430f54164"
	I1003 19:37:43.664832  475193 cri.go:89] found id: "c2d2e81f1c95c24f945e4ca4a6f6e6308d203a2030802e620a0adb06b519a7d2"
	I1003 19:37:43.664843  475193 cri.go:89] found id: "d250f6446c88cc68c5a3d4d9876c5bdef89e65ab6fd74df4fbd79456c956c5d8"
	I1003 19:37:43.664846  475193 cri.go:89] found id: ""
	I1003 19:37:43.664929  475193 ssh_runner.go:195] Run: sudo runc list -f json
	I1003 19:37:43.676191  475193 retry.go:31] will retry after 377.060401ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-03T19:37:43Z" level=error msg="open /run/runc: no such file or directory"
	I1003 19:37:44.053712  475193 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 19:37:44.066917  475193 pause.go:51] kubelet running: false
	I1003 19:37:44.066986  475193 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1003 19:37:44.244027  475193 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1003 19:37:44.244113  475193 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1003 19:37:44.308898  475193 cri.go:89] found id: "299e25627798dd200810afddc280b9b6853cae4ac0ac3aba81703a80b719f759"
	I1003 19:37:44.308927  475193 cri.go:89] found id: "edf79b93e4b38e2ee91c81e9e314756148e9674922f93889028ee8c7ecc4ef9d"
	I1003 19:37:44.308931  475193 cri.go:89] found id: "ed93641b7305ecc78cf05b71981a9b30e56f9dd16df2e6eb2b65f4cc3ef9c10b"
	I1003 19:37:44.308935  475193 cri.go:89] found id: "b0164ebd7fa623d22d654d8c31fba34f430360c496ed08d6a01ebbe6ad7fa8fd"
	I1003 19:37:44.308939  475193 cri.go:89] found id: "07e35fb642fb1060de6f5b6fe3a20dcbf4caddf1bf2630c89f54858a905f5d85"
	I1003 19:37:44.308943  475193 cri.go:89] found id: "9d777d7ca3f3aae2a67724d1a6f8ab7dbc9844b33527c107ab163508dd940d95"
	I1003 19:37:44.308946  475193 cri.go:89] found id: "fc8be4f0125f487dca2dc76dd1220ac22ffcd4a1e02920fcc8ee321799717ac2"
	I1003 19:37:44.308950  475193 cri.go:89] found id: "5178fc63373a85b7ab0aa3b1194bd3b13ba6e413c7f9fcf141e7a055caeea3d9"
	I1003 19:37:44.308953  475193 cri.go:89] found id: "62ef8d10feba1f56202dc665fa46660c227322fdddf49c3e984ffb9430f54164"
	I1003 19:37:44.308960  475193 cri.go:89] found id: "c2d2e81f1c95c24f945e4ca4a6f6e6308d203a2030802e620a0adb06b519a7d2"
	I1003 19:37:44.308963  475193 cri.go:89] found id: "d250f6446c88cc68c5a3d4d9876c5bdef89e65ab6fd74df4fbd79456c956c5d8"
	I1003 19:37:44.308966  475193 cri.go:89] found id: ""
	I1003 19:37:44.309014  475193 ssh_runner.go:195] Run: sudo runc list -f json
	I1003 19:37:44.323148  475193 out.go:203] 
	W1003 19:37:44.326167  475193 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-03T19:37:44Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-03T19:37:44Z" level=error msg="open /run/runc: no such file or directory"
	
	W1003 19:37:44.326222  475193 out.go:285] * 
	* 
	W1003 19:37:44.333456  475193 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 19:37:44.336571  475193 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p old-k8s-version-174543 --alsologtostderr -v=1 failed: exit status 80
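Note on the failure above: every retry hits the same condition. `crictl ps` keeps returning the same eleven running container IDs across the kube-system/kubernetes-dashboard namespaces, but each `sudo runc list -f json` attempt exits 1 with "open /run/runc: no such file or directory" (runc's default state root is missing on the node), so the pause flow can never enumerate containers to pause and finally exits with GUEST_PAUSE. Below is a minimal Go sketch of that check, for illustration only; it is not minikube's pause code, and the fallback to `crictl ps -q` is an assumption, not what the tool does:

	// pausecheck_sketch.go: hypothetical helper, not minikube's pause code.
	// It illustrates the condition the log records: runc's state root is
	// absent, so "runc list" can only fail, while crictl still sees the
	// running containers.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// listRunningIDs prefers "runc list -f json" (the command shown in the
	// log) but falls back to "crictl ps -q" when the runc state root is gone.
	func listRunningIDs(runcRoot string) ([]byte, error) {
		if _, err := os.Stat(runcRoot); os.IsNotExist(err) {
			// /run/runc missing: retrying runc only reproduces
			// "open /run/runc: no such file or directory".
			return exec.Command("sudo", "crictl", "ps", "-q").Output()
		}
		return exec.Command("sudo", "runc", "--root", runcRoot, "list", "-f", "json").Output()
	}

	func main() {
		ids, err := listRunningIDs("/run/runc")
		if err != nil {
			fmt.Fprintln(os.Stderr, "listing containers:", err)
			os.Exit(1)
		}
		fmt.Print(string(ids))
	}

Running the two underlying commands by hand on the node (stat /run/runc, then sudo crictl ps -q) should reproduce the same asymmetry the log records.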
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-174543
helpers_test.go:243: (dbg) docker inspect old-k8s-version-174543:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e396cf711cf72d67a3eb0308bfe582b67073d4549b3bd8af7083d99767f74cff",
	        "Created": "2025-10-03T19:35:07.94543535Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 470976,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-03T19:36:30.595188821Z",
	            "FinishedAt": "2025-10-03T19:36:29.589392196Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/e396cf711cf72d67a3eb0308bfe582b67073d4549b3bd8af7083d99767f74cff/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e396cf711cf72d67a3eb0308bfe582b67073d4549b3bd8af7083d99767f74cff/hostname",
	        "HostsPath": "/var/lib/docker/containers/e396cf711cf72d67a3eb0308bfe582b67073d4549b3bd8af7083d99767f74cff/hosts",
	        "LogPath": "/var/lib/docker/containers/e396cf711cf72d67a3eb0308bfe582b67073d4549b3bd8af7083d99767f74cff/e396cf711cf72d67a3eb0308bfe582b67073d4549b3bd8af7083d99767f74cff-json.log",
	        "Name": "/old-k8s-version-174543",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-174543:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-174543",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e396cf711cf72d67a3eb0308bfe582b67073d4549b3bd8af7083d99767f74cff",
	                "LowerDir": "/var/lib/docker/overlay2/48f8d5487aa8e63c3522dc4412a644c246929812a11cb3ecb803638938d2de80-init/diff:/var/lib/docker/overlay2/87b205803817b0b71a214d995ab7e10a92033bbf72d76d6e052f1d21ccecb313/diff",
	                "MergedDir": "/var/lib/docker/overlay2/48f8d5487aa8e63c3522dc4412a644c246929812a11cb3ecb803638938d2de80/merged",
	                "UpperDir": "/var/lib/docker/overlay2/48f8d5487aa8e63c3522dc4412a644c246929812a11cb3ecb803638938d2de80/diff",
	                "WorkDir": "/var/lib/docker/overlay2/48f8d5487aa8e63c3522dc4412a644c246929812a11cb3ecb803638938d2de80/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-174543",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-174543/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-174543",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-174543",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-174543",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b24d003fd1caf21e4e07e675a9d2114babca3dd3bb9e5a164b5dbd0f97c5baf9",
	            "SandboxKey": "/var/run/docker/netns/b24d003fd1ca",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33428"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33429"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33432"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33430"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33431"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-174543": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f2:68:ca:40:c1:7e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "002964c2ebf4675c3eed6a35959bca86f080d98023eaf2d830eb21475b5fd360",
	                    "EndpointID": "4b452d495b368ceeda75fdbfb658d632c2f7c01d6f152df2b1f0e3789e647080",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-174543",
	                        "e396cf711cf7"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
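For cross-reference: the SSH endpoint the pause command dialed earlier (127.0.0.1:33428) comes from the NetworkSettings.Ports map in this inspect output, extracted by the cli_runner template {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}} shown above. A self-contained sketch of the same lookup in Go (illustrative only; the struct covers just the port fields used here):

	// portfrominspect_sketch.go: assumed example, not code from this suite.
	// It decodes "docker inspect" JSON and prints the host port mapped to
	// the container's 22/tcp, i.e. the value the SSH client used (33428).
	package main

	import (
		"encoding/json"
		"fmt"
		"os"
		"os/exec"
	)

	type inspectEntry struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "old-k8s-version-174543").Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		var entries []inspectEntry
		if err := json.Unmarshal(out, &entries); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// Same lookup the Go template performs.
		fmt.Println(entries[0].NetworkSettings.Ports["22/tcp"][0].HostPort)
	}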
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-174543 -n old-k8s-version-174543
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-174543 -n old-k8s-version-174543: exit status 2 (351.096228ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-174543 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-174543 logs -n 25: (1.885564273s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────
────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────
────┤
	│ ssh     │ -p cilium-388132 sudo containerd config dump                                                                                                                                                                                                  │ cilium-388132             │ jenkins │ v1.37.0 │ 03 Oct 25 19:25 UTC │                     │
	│ ssh     │ -p cilium-388132 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-388132             │ jenkins │ v1.37.0 │ 03 Oct 25 19:25 UTC │                     │
	│ ssh     │ -p cilium-388132 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-388132             │ jenkins │ v1.37.0 │ 03 Oct 25 19:25 UTC │                     │
	│ ssh     │ -p cilium-388132 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-388132             │ jenkins │ v1.37.0 │ 03 Oct 25 19:25 UTC │                     │
	│ ssh     │ -p cilium-388132 sudo crio config                                                                                                                                                                                                             │ cilium-388132             │ jenkins │ v1.37.0 │ 03 Oct 25 19:25 UTC │                     │
	│ delete  │ -p cilium-388132                                                                                                                                                                                                                              │ cilium-388132             │ jenkins │ v1.37.0 │ 03 Oct 25 19:25 UTC │ 03 Oct 25 19:25 UTC │
	│ start   │ -p force-systemd-env-159095 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-159095  │ jenkins │ v1.37.0 │ 03 Oct 25 19:25 UTC │                     │
	│ ssh     │ force-systemd-flag-855981 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-855981 │ jenkins │ v1.37.0 │ 03 Oct 25 19:32 UTC │ 03 Oct 25 19:32 UTC │
	│ delete  │ -p force-systemd-flag-855981                                                                                                                                                                                                                  │ force-systemd-flag-855981 │ jenkins │ v1.37.0 │ 03 Oct 25 19:32 UTC │ 03 Oct 25 19:32 UTC │
	│ start   │ -p cert-expiration-324520 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-324520    │ jenkins │ v1.37.0 │ 03 Oct 25 19:32 UTC │ 03 Oct 25 19:33 UTC │
	│ delete  │ -p force-systemd-env-159095                                                                                                                                                                                                                   │ force-systemd-env-159095  │ jenkins │ v1.37.0 │ 03 Oct 25 19:34 UTC │ 03 Oct 25 19:34 UTC │
	│ start   │ -p cert-options-305866 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-305866       │ jenkins │ v1.37.0 │ 03 Oct 25 19:34 UTC │ 03 Oct 25 19:34 UTC │
	│ ssh     │ cert-options-305866 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-305866       │ jenkins │ v1.37.0 │ 03 Oct 25 19:34 UTC │ 03 Oct 25 19:34 UTC │
	│ ssh     │ -p cert-options-305866 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-305866       │ jenkins │ v1.37.0 │ 03 Oct 25 19:34 UTC │ 03 Oct 25 19:34 UTC │
	│ delete  │ -p cert-options-305866                                                                                                                                                                                                                        │ cert-options-305866       │ jenkins │ v1.37.0 │ 03 Oct 25 19:34 UTC │ 03 Oct 25 19:35 UTC │
	│ start   │ -p old-k8s-version-174543 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-174543    │ jenkins │ v1.37.0 │ 03 Oct 25 19:35 UTC │ 03 Oct 25 19:36 UTC │
	│ start   │ -p cert-expiration-324520 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-324520    │ jenkins │ v1.37.0 │ 03 Oct 25 19:36 UTC │ 03 Oct 25 19:36 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-174543 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-174543    │ jenkins │ v1.37.0 │ 03 Oct 25 19:36 UTC │                     │
	│ stop    │ -p old-k8s-version-174543 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-174543    │ jenkins │ v1.37.0 │ 03 Oct 25 19:36 UTC │ 03 Oct 25 19:36 UTC │
	│ delete  │ -p cert-expiration-324520                                                                                                                                                                                                                     │ cert-expiration-324520    │ jenkins │ v1.37.0 │ 03 Oct 25 19:36 UTC │ 03 Oct 25 19:36 UTC │
	│ start   │ -p no-preload-643397 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-643397         │ jenkins │ v1.37.0 │ 03 Oct 25 19:36 UTC │ 03 Oct 25 19:37 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-174543 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-174543    │ jenkins │ v1.37.0 │ 03 Oct 25 19:36 UTC │ 03 Oct 25 19:36 UTC │
	│ start   │ -p old-k8s-version-174543 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-174543    │ jenkins │ v1.37.0 │ 03 Oct 25 19:36 UTC │ 03 Oct 25 19:37 UTC │
	│ image   │ old-k8s-version-174543 image list --format=json                                                                                                                                                                                               │ old-k8s-version-174543    │ jenkins │ v1.37.0 │ 03 Oct 25 19:37 UTC │ 03 Oct 25 19:37 UTC │
	│ pause   │ -p old-k8s-version-174543 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-174543    │ jenkins │ v1.37.0 │ 03 Oct 25 19:37 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────
────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/03 19:36:30
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 19:36:30.250303  470831 out.go:360] Setting OutFile to fd 1 ...
	I1003 19:36:30.250494  470831 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 19:36:30.250523  470831 out.go:374] Setting ErrFile to fd 2...
	I1003 19:36:30.250546  470831 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 19:36:30.250819  470831 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-284583/.minikube/bin
	I1003 19:36:30.251259  470831 out.go:368] Setting JSON to false
	I1003 19:36:30.252174  470831 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8342,"bootTime":1759511849,"procs":166,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1003 19:36:30.252267  470831 start.go:140] virtualization:  
	I1003 19:36:30.257178  470831 out.go:179] * [old-k8s-version-174543] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1003 19:36:30.260325  470831 out.go:179]   - MINIKUBE_LOCATION=21625
	I1003 19:36:30.260401  470831 notify.go:220] Checking for updates...
	I1003 19:36:30.267120  470831 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 19:36:30.270199  470831 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21625-284583/kubeconfig
	I1003 19:36:30.276956  470831 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-284583/.minikube
	I1003 19:36:30.279893  470831 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1003 19:36:30.282916  470831 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 19:36:30.286374  470831 config.go:182] Loaded profile config "old-k8s-version-174543": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1003 19:36:30.289864  470831 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1003 19:36:30.292678  470831 driver.go:421] Setting default libvirt URI to qemu:///system
	I1003 19:36:30.336883  470831 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1003 19:36:30.337040  470831 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 19:36:30.414358  470831 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:46 OomKillDisable:true NGoroutines:60 SystemTime:2025-10-03 19:36:30.404346993 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1003 19:36:30.414469  470831 docker.go:318] overlay module found
	I1003 19:36:30.417827  470831 out.go:179] * Using the docker driver based on existing profile
	I1003 19:36:30.420720  470831 start.go:304] selected driver: docker
	I1003 19:36:30.420758  470831 start.go:924] validating driver "docker" against &{Name:old-k8s-version-174543 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-174543 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 19:36:30.420853  470831 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 19:36:30.421578  470831 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 19:36:30.506943  470831 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:46 OomKillDisable:true NGoroutines:60 SystemTime:2025-10-03 19:36:30.493477103 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1003 19:36:30.507327  470831 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 19:36:30.507368  470831 cni.go:84] Creating CNI manager for ""
	I1003 19:36:30.507434  470831 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 19:36:30.507477  470831 start.go:348] cluster config:
	{Name:old-k8s-version-174543 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-174543 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 19:36:30.510812  470831 out.go:179] * Starting "old-k8s-version-174543" primary control-plane node in "old-k8s-version-174543" cluster
	I1003 19:36:30.513670  470831 cache.go:123] Beginning downloading kic base image for docker with crio
	I1003 19:36:30.516637  470831 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1003 19:36:30.519439  470831 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1003 19:36:30.519507  470831 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21625-284583/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1003 19:36:30.519517  470831 cache.go:58] Caching tarball of preloaded images
	I1003 19:36:30.519513  470831 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1003 19:36:30.519599  470831 preload.go:233] Found /home/jenkins/minikube-integration/21625-284583/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1003 19:36:30.519608  470831 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1003 19:36:30.519724  470831 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/old-k8s-version-174543/config.json ...
	I1003 19:36:30.540975  470831 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1003 19:36:30.540996  470831 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1003 19:36:30.541009  470831 cache.go:232] Successfully downloaded all kic artifacts
	I1003 19:36:30.541031  470831 start.go:360] acquireMachinesLock for old-k8s-version-174543: {Name:mk19048ea0453627d87a673cd3a2fbc4574461a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 19:36:30.541081  470831 start.go:364] duration metric: took 34.183µs to acquireMachinesLock for "old-k8s-version-174543"
	I1003 19:36:30.541100  470831 start.go:96] Skipping create...Using existing machine configuration
	I1003 19:36:30.541105  470831 fix.go:54] fixHost starting: 
	I1003 19:36:30.541364  470831 cli_runner.go:164] Run: docker container inspect old-k8s-version-174543 --format={{.State.Status}}
	I1003 19:36:30.557751  470831 fix.go:112] recreateIfNeeded on old-k8s-version-174543: state=Stopped err=<nil>
	W1003 19:36:30.557780  470831 fix.go:138] unexpected machine state, will restart: <nil>
	I1003 19:36:29.888287  469677 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-643397
	
	I1003 19:36:29.888312  469677 ubuntu.go:182] provisioning hostname "no-preload-643397"
	I1003 19:36:29.888373  469677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-643397
	I1003 19:36:29.911157  469677 main.go:141] libmachine: Using SSH client type: native
	I1003 19:36:29.911451  469677 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1003 19:36:29.911465  469677 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-643397 && echo "no-preload-643397" | sudo tee /etc/hostname
	I1003 19:36:30.097224  469677 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-643397
	
	I1003 19:36:30.097314  469677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-643397
	I1003 19:36:30.129074  469677 main.go:141] libmachine: Using SSH client type: native
	I1003 19:36:30.129399  469677 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1003 19:36:30.129417  469677 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-643397' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-643397/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-643397' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 19:36:30.275239  469677 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 19:36:30.275263  469677 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21625-284583/.minikube CaCertPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21625-284583/.minikube}
	I1003 19:36:30.275285  469677 ubuntu.go:190] setting up certificates
	I1003 19:36:30.275296  469677 provision.go:84] configureAuth start
	I1003 19:36:30.275356  469677 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-643397
	I1003 19:36:30.296110  469677 provision.go:143] copyHostCerts
	I1003 19:36:30.296190  469677 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-284583/.minikube/ca.pem, removing ...
	I1003 19:36:30.296200  469677 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-284583/.minikube/ca.pem
	I1003 19:36:30.296284  469677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21625-284583/.minikube/ca.pem (1082 bytes)
	I1003 19:36:30.296395  469677 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-284583/.minikube/cert.pem, removing ...
	I1003 19:36:30.296404  469677 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-284583/.minikube/cert.pem
	I1003 19:36:30.296438  469677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21625-284583/.minikube/cert.pem (1123 bytes)
	I1003 19:36:30.296491  469677 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-284583/.minikube/key.pem, removing ...
	I1003 19:36:30.296496  469677 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-284583/.minikube/key.pem
	I1003 19:36:30.296519  469677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21625-284583/.minikube/key.pem (1675 bytes)
	I1003 19:36:30.296573  469677 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca-key.pem org=jenkins.no-preload-643397 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-643397]
	I1003 19:36:31.243632  469677 provision.go:177] copyRemoteCerts
	I1003 19:36:31.243707  469677 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 19:36:31.243750  469677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-643397
	I1003 19:36:31.265968  469677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/no-preload-643397/id_rsa Username:docker}
	I1003 19:36:31.367118  469677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 19:36:31.394435  469677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1003 19:36:31.426437  469677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1003 19:36:31.460100  469677 provision.go:87] duration metric: took 1.18478156s to configureAuth
	I1003 19:36:31.460175  469677 ubuntu.go:206] setting minikube options for container-runtime
	I1003 19:36:31.460399  469677 config.go:182] Loaded profile config "no-preload-643397": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 19:36:31.460582  469677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-643397
	I1003 19:36:31.483776  469677 main.go:141] libmachine: Using SSH client type: native
	I1003 19:36:31.484112  469677 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1003 19:36:31.484128  469677 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1003 19:36:31.741630  469677 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1003 19:36:31.741713  469677 machine.go:96] duration metric: took 5.061104012s to provisionDockerMachine
	I1003 19:36:31.741739  469677 client.go:171] duration metric: took 6.85414651s to LocalClient.Create
	I1003 19:36:31.741791  469677 start.go:167] duration metric: took 6.854271353s to libmachine.API.Create "no-preload-643397"
	I1003 19:36:31.741850  469677 start.go:293] postStartSetup for "no-preload-643397" (driver="docker")
	I1003 19:36:31.741878  469677 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 19:36:31.741973  469677 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 19:36:31.742040  469677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-643397
	I1003 19:36:31.759621  469677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/no-preload-643397/id_rsa Username:docker}
	I1003 19:36:31.856950  469677 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 19:36:31.860016  469677 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1003 19:36:31.860050  469677 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1003 19:36:31.860061  469677 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-284583/.minikube/addons for local assets ...
	I1003 19:36:31.860115  469677 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-284583/.minikube/files for local assets ...
	I1003 19:36:31.860195  469677 filesync.go:149] local asset: /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem -> 2864342.pem in /etc/ssl/certs
	I1003 19:36:31.860296  469677 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 19:36:31.867513  469677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem --> /etc/ssl/certs/2864342.pem (1708 bytes)
	I1003 19:36:31.885054  469677 start.go:296] duration metric: took 143.173249ms for postStartSetup
	I1003 19:36:31.885428  469677 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-643397
	I1003 19:36:31.902133  469677 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/config.json ...
	I1003 19:36:31.902412  469677 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 19:36:31.902472  469677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-643397
	I1003 19:36:31.918558  469677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/no-preload-643397/id_rsa Username:docker}
	I1003 19:36:32.012703  469677 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1003 19:36:32.018111  469677 start.go:128] duration metric: took 7.134271436s to createHost
	I1003 19:36:32.018135  469677 start.go:83] releasing machines lock for "no-preload-643397", held for 7.134409604s
	I1003 19:36:32.018208  469677 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-643397
	I1003 19:36:32.035359  469677 ssh_runner.go:195] Run: cat /version.json
	I1003 19:36:32.035416  469677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-643397
	I1003 19:36:32.035661  469677 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 19:36:32.035730  469677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-643397
	I1003 19:36:32.056813  469677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/no-preload-643397/id_rsa Username:docker}
	I1003 19:36:32.057019  469677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/no-preload-643397/id_rsa Username:docker}
	I1003 19:36:32.247781  469677 ssh_runner.go:195] Run: systemctl --version
	I1003 19:36:32.254306  469677 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1003 19:36:32.289494  469677 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1003 19:36:32.294123  469677 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 19:36:32.294252  469677 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 19:36:32.324165  469677 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1003 19:36:32.324188  469677 start.go:495] detecting cgroup driver to use...
	I1003 19:36:32.324220  469677 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1003 19:36:32.324271  469677 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 19:36:32.342515  469677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 19:36:32.355242  469677 docker.go:218] disabling cri-docker service (if available) ...
	I1003 19:36:32.355336  469677 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1003 19:36:32.373198  469677 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1003 19:36:32.393125  469677 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1003 19:36:32.514303  469677 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1003 19:36:32.631659  469677 docker.go:234] disabling docker service ...
	I1003 19:36:32.631788  469677 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1003 19:36:32.656370  469677 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1003 19:36:32.670863  469677 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1003 19:36:32.791284  469677 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1003 19:36:32.911277  469677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1003 19:36:32.924107  469677 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 19:36:32.938287  469677 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1003 19:36:32.938366  469677 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:36:32.946968  469677 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1003 19:36:32.947047  469677 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:36:32.955545  469677 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:36:32.964065  469677 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:36:32.972790  469677 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 19:36:32.980705  469677 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:36:32.989640  469677 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:36:33.004406  469677 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:36:33.016483  469677 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 19:36:33.024887  469677 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 19:36:33.032762  469677 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 19:36:33.145045  469677 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1003 19:36:33.274369  469677 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1003 19:36:33.274467  469677 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1003 19:36:33.278514  469677 start.go:563] Will wait 60s for crictl version
	I1003 19:36:33.278611  469677 ssh_runner.go:195] Run: which crictl
	I1003 19:36:33.282251  469677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1003 19:36:33.311593  469677 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1003 19:36:33.311722  469677 ssh_runner.go:195] Run: crio --version
	I1003 19:36:33.340238  469677 ssh_runner.go:195] Run: crio --version
	I1003 19:36:33.373021  469677 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1003 19:36:33.375998  469677 cli_runner.go:164] Run: docker network inspect no-preload-643397 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 19:36:33.391502  469677 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1003 19:36:33.395406  469677 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 19:36:33.405040  469677 kubeadm.go:883] updating cluster {Name:no-preload-643397 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-643397 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1003 19:36:33.405163  469677 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 19:36:33.405211  469677 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 19:36:33.431075  469677 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1003 19:36:33.431098  469677 cache_images.go:89] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1003 19:36:33.431180  469677 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 19:36:33.431390  469677 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1003 19:36:33.431484  469677 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1003 19:36:33.431563  469677 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1003 19:36:33.431666  469677 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1003 19:36:33.431762  469677 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1003 19:36:33.431843  469677 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1003 19:36:33.431979  469677 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1003 19:36:33.433411  469677 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1003 19:36:33.433668  469677 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 19:36:33.434250  469677 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1003 19:36:33.434497  469677 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1003 19:36:33.434701  469677 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1003 19:36:33.434887  469677 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1003 19:36:33.435088  469677 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1003 19:36:33.435250  469677 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1003 19:36:33.664277  469677 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.1
	I1003 19:36:33.664905  469677 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.4-0
	I1003 19:36:33.686754  469677 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.1
	I1003 19:36:33.688953  469677 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.1
	I1003 19:36:33.693910  469677 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1003 19:36:33.695245  469677 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1003 19:36:33.703603  469677 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.1
	I1003 19:36:33.727298  469677 cache_images.go:117] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196" in container runtime
	I1003 19:36:33.727341  469677 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1003 19:36:33.727413  469677 ssh_runner.go:195] Run: which crictl
	I1003 19:36:33.731888  469677 cache_images.go:117] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e" in container runtime
	I1003 19:36:33.731937  469677 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1003 19:36:33.732001  469677 ssh_runner.go:195] Run: which crictl
	I1003 19:36:33.808862  469677 cache_images.go:117] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9" in container runtime
	I1003 19:36:33.808934  469677 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1003 19:36:33.809006  469677 ssh_runner.go:195] Run: which crictl
	I1003 19:36:33.822519  469677 cache_images.go:117] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0" in container runtime
	I1003 19:36:33.822562  469677 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1003 19:36:33.822661  469677 ssh_runner.go:195] Run: which crictl
	I1003 19:36:33.826959  469677 cache_images.go:117] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc" in container runtime
	I1003 19:36:33.827026  469677 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1003 19:36:33.827082  469677 ssh_runner.go:195] Run: which crictl
	I1003 19:36:33.827187  469677 cache_images.go:117] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1003 19:36:33.827222  469677 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1003 19:36:33.827255  469677 ssh_runner.go:195] Run: which crictl
	I1003 19:36:33.829319  469677 cache_images.go:117] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a" in container runtime
	I1003 19:36:33.829388  469677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1003 19:36:33.829419  469677 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1003 19:36:33.829492  469677 ssh_runner.go:195] Run: which crictl
	I1003 19:36:33.829518  469677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1003 19:36:33.829334  469677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1003 19:36:33.836401  469677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1003 19:36:33.836515  469677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1003 19:36:33.838188  469677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1003 19:36:33.919978  469677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1003 19:36:33.920083  469677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1003 19:36:33.920154  469677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1003 19:36:33.920238  469677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1003 19:36:33.932206  469677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1003 19:36:33.932323  469677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1003 19:36:33.932391  469677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1003 19:36:34.020085  469677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1003 19:36:34.020207  469677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1003 19:36:34.020288  469677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1003 19:36:34.020365  469677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1003 19:36:34.049008  469677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1003 19:36:34.049126  469677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1003 19:36:34.049207  469677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1003 19:36:34.167904  469677 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1003 19:36:34.168055  469677 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1003 19:36:34.168144  469677 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1003 19:36:34.168224  469677 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1003 19:36:34.168292  469677 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1003 19:36:34.168427  469677 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1003 19:36:34.172013  469677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1003 19:36:34.179883  469677 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1003 19:36:34.179981  469677 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1003 19:36:34.180078  469677 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1003 19:36:34.180122  469677 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1003 19:36:34.180194  469677 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1003 19:36:34.180226  469677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (24581632 bytes)
	I1003 19:36:34.180259  469677 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1003 19:36:34.180281  469677 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1003 19:36:34.180325  469677 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1003 19:36:34.180368  469677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (98216960 bytes)
	I1003 19:36:34.180454  469677 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1003 19:36:34.180478  469677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (22790144 bytes)
	I1003 19:36:34.280256  469677 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1003 19:36:34.280295  469677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (20402176 bytes)
	I1003 19:36:34.280348  469677 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1003 19:36:34.280365  469677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (15790592 bytes)
	I1003 19:36:34.280411  469677 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1003 19:36:34.280486  469677 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1003 19:36:34.280533  469677 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1003 19:36:34.280549  469677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	W1003 19:36:34.315289  469677 ssh_runner.go:129] session error, resetting client: ssh: rejected: connect failed (open failed)
	I1003 19:36:34.315337  469677 retry.go:31] will retry after 228.546049ms: ssh: rejected: connect failed (open failed)
	I1003 19:36:34.388834  469677 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1003 19:36:34.388883  469677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (20730880 bytes)
	I1003 19:36:34.388984  469677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-643397
	I1003 19:36:34.430781  469677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/no-preload-643397/id_rsa Username:docker}
	W1003 19:36:34.646437  469677 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1003 19:36:34.646672  469677 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 19:36:30.561067  470831 out.go:252] * Restarting existing docker container for "old-k8s-version-174543" ...
	I1003 19:36:30.561167  470831 cli_runner.go:164] Run: docker start old-k8s-version-174543
	I1003 19:36:30.899786  470831 cli_runner.go:164] Run: docker container inspect old-k8s-version-174543 --format={{.State.Status}}
	I1003 19:36:30.946093  470831 kic.go:430] container "old-k8s-version-174543" state is running.
	I1003 19:36:30.946478  470831 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-174543
	I1003 19:36:30.993439  470831 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/old-k8s-version-174543/config.json ...
	I1003 19:36:30.994728  470831 machine.go:93] provisionDockerMachine start ...
	I1003 19:36:30.994803  470831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-174543
	I1003 19:36:31.031278  470831 main.go:141] libmachine: Using SSH client type: native
	I1003 19:36:31.031607  470831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I1003 19:36:31.031621  470831 main.go:141] libmachine: About to run SSH command:
	hostname
	I1003 19:36:31.032316  470831 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44486->127.0.0.1:33428: read: connection reset by peer
	I1003 19:36:34.204180  470831 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-174543
	
	I1003 19:36:34.204274  470831 ubuntu.go:182] provisioning hostname "old-k8s-version-174543"
	I1003 19:36:34.204364  470831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-174543
	I1003 19:36:34.226862  470831 main.go:141] libmachine: Using SSH client type: native
	I1003 19:36:34.227164  470831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I1003 19:36:34.227176  470831 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-174543 && echo "old-k8s-version-174543" | sudo tee /etc/hostname
	I1003 19:36:34.402266  470831 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-174543
	
	I1003 19:36:34.402352  470831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-174543
	I1003 19:36:34.438692  470831 main.go:141] libmachine: Using SSH client type: native
	I1003 19:36:34.439122  470831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I1003 19:36:34.439145  470831 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-174543' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-174543/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-174543' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 19:36:34.605174  470831 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 19:36:34.605197  470831 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21625-284583/.minikube CaCertPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21625-284583/.minikube}
	I1003 19:36:34.605215  470831 ubuntu.go:190] setting up certificates
	I1003 19:36:34.605225  470831 provision.go:84] configureAuth start
	I1003 19:36:34.605292  470831 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-174543
	I1003 19:36:34.638381  470831 provision.go:143] copyHostCerts
	I1003 19:36:34.638446  470831 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-284583/.minikube/ca.pem, removing ...
	I1003 19:36:34.638463  470831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-284583/.minikube/ca.pem
	I1003 19:36:34.638532  470831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21625-284583/.minikube/ca.pem (1082 bytes)
	I1003 19:36:34.638627  470831 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-284583/.minikube/cert.pem, removing ...
	I1003 19:36:34.638633  470831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-284583/.minikube/cert.pem
	I1003 19:36:34.638661  470831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21625-284583/.minikube/cert.pem (1123 bytes)
	I1003 19:36:34.638725  470831 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-284583/.minikube/key.pem, removing ...
	I1003 19:36:34.638730  470831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-284583/.minikube/key.pem
	I1003 19:36:34.638754  470831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21625-284583/.minikube/key.pem (1675 bytes)
	I1003 19:36:34.638805  470831 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-174543 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-174543]
	I1003 19:36:35.486484  470831 provision.go:177] copyRemoteCerts
	I1003 19:36:35.486873  470831 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 19:36:35.486984  470831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-174543
	I1003 19:36:35.534150  470831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/old-k8s-version-174543/id_rsa Username:docker}
	I1003 19:36:35.650048  470831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 19:36:35.691502  470831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1003 19:36:35.733348  470831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1003 19:36:35.769920  470831 provision.go:87] duration metric: took 1.164682718s to configureAuth
	I1003 19:36:35.769944  470831 ubuntu.go:206] setting minikube options for container-runtime
	I1003 19:36:35.770141  470831 config.go:182] Loaded profile config "old-k8s-version-174543": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1003 19:36:35.770244  470831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-174543
	I1003 19:36:35.790817  470831 main.go:141] libmachine: Using SSH client type: native
	I1003 19:36:35.791140  470831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I1003 19:36:35.791162  470831 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1003 19:36:36.147469  470831 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1003 19:36:36.147497  470831 machine.go:96] duration metric: took 5.152751689s to provisionDockerMachine
	I1003 19:36:36.147509  470831 start.go:293] postStartSetup for "old-k8s-version-174543" (driver="docker")
	I1003 19:36:36.147542  470831 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 19:36:36.147641  470831 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 19:36:36.147697  470831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-174543
	I1003 19:36:36.177232  470831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/old-k8s-version-174543/id_rsa Username:docker}
	I1003 19:36:36.288843  470831 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 19:36:36.292704  470831 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1003 19:36:36.292790  470831 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1003 19:36:36.292816  470831 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-284583/.minikube/addons for local assets ...
	I1003 19:36:36.292902  470831 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-284583/.minikube/files for local assets ...
	I1003 19:36:36.293042  470831 filesync.go:149] local asset: /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem -> 2864342.pem in /etc/ssl/certs
	I1003 19:36:36.293214  470831 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 19:36:36.301319  470831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem --> /etc/ssl/certs/2864342.pem (1708 bytes)
	I1003 19:36:36.333038  470831 start.go:296] duration metric: took 185.510283ms for postStartSetup
	I1003 19:36:36.333203  470831 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 19:36:36.333279  470831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-174543
	I1003 19:36:36.386053  470831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/old-k8s-version-174543/id_rsa Username:docker}
	I1003 19:36:36.497817  470831 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1003 19:36:36.504278  470831 fix.go:56] duration metric: took 5.963165639s for fixHost
	I1003 19:36:36.504310  470831 start.go:83] releasing machines lock for "old-k8s-version-174543", held for 5.963220515s
	I1003 19:36:36.504391  470831 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-174543
	I1003 19:36:36.529637  470831 ssh_runner.go:195] Run: cat /version.json
	I1003 19:36:36.529696  470831 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 19:36:36.529769  470831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-174543
	I1003 19:36:36.529698  470831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-174543
	I1003 19:36:36.561759  470831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/old-k8s-version-174543/id_rsa Username:docker}
	I1003 19:36:36.573961  470831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/old-k8s-version-174543/id_rsa Username:docker}
	I1003 19:36:36.779306  470831 ssh_runner.go:195] Run: systemctl --version
	I1003 19:36:36.786533  470831 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1003 19:36:36.832494  470831 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1003 19:36:36.837907  470831 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 19:36:36.837987  470831 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 19:36:36.847208  470831 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1003 19:36:36.847261  470831 start.go:495] detecting cgroup driver to use...
	I1003 19:36:36.847295  470831 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1003 19:36:36.847354  470831 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 19:36:36.865816  470831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 19:36:36.880072  470831 docker.go:218] disabling cri-docker service (if available) ...
	I1003 19:36:36.880182  470831 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1003 19:36:36.897242  470831 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1003 19:36:36.911479  470831 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1003 19:36:37.052811  470831 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1003 19:36:37.188773  470831 docker.go:234] disabling docker service ...
	I1003 19:36:37.188916  470831 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1003 19:36:37.204769  470831 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1003 19:36:37.221757  470831 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1003 19:36:37.365939  470831 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1003 19:36:37.510943  470831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1003 19:36:37.524746  470831 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 19:36:37.543788  470831 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1003 19:36:37.543905  470831 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:36:37.554315  470831 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1003 19:36:37.554469  470831 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:36:37.564239  470831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:36:37.580279  470831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:36:37.595387  470831 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 19:36:37.603905  470831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:36:37.615691  470831 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:36:37.624764  470831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:36:37.633792  470831 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 19:36:37.642054  470831 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 19:36:37.651457  470831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 19:36:37.863516  470831 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1003 19:36:38.329902  470831 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1003 19:36:38.330025  470831 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1003 19:36:38.335449  470831 start.go:563] Will wait 60s for crictl version
	I1003 19:36:38.335577  470831 ssh_runner.go:195] Run: which crictl
	I1003 19:36:38.341293  470831 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1003 19:36:38.390604  470831 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1003 19:36:38.390763  470831 ssh_runner.go:195] Run: crio --version
	I1003 19:36:38.428125  470831 ssh_runner.go:195] Run: crio --version
	I1003 19:36:38.483368  470831 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1003 19:36:34.789323  469677 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1003 19:36:34.789417  469677 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1003 19:36:34.914066  469677 cache_images.go:117] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1003 19:36:34.914105  469677 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 19:36:34.914164  469677 ssh_runner.go:195] Run: which crictl
	I1003 19:36:35.250876  469677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 19:36:35.272126  469677 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1003 19:36:35.272225  469677 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1003 19:36:35.272326  469677 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1003 19:36:35.437308  469677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 19:36:37.594416  469677 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1: (2.322063589s)
	I1003 19:36:37.594439  469677 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1003 19:36:37.594455  469677 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1003 19:36:37.594503  469677 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1003 19:36:37.594555  469677 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.157226594s)
	I1003 19:36:37.594585  469677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 19:36:38.486518  470831 cli_runner.go:164] Run: docker network inspect old-k8s-version-174543 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 19:36:38.506334  470831 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1003 19:36:38.511730  470831 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 19:36:38.529412  470831 kubeadm.go:883] updating cluster {Name:old-k8s-version-174543 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-174543 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1003 19:36:38.529522  470831 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1003 19:36:38.529576  470831 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 19:36:38.585748  470831 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 19:36:38.585776  470831 crio.go:433] Images already preloaded, skipping extraction
	I1003 19:36:38.585830  470831 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 19:36:38.628275  470831 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 19:36:38.628301  470831 cache_images.go:85] Images are preloaded, skipping loading
	I1003 19:36:38.628309  470831 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1003 19:36:38.628411  470831 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-174543 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-174543 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1003 19:36:38.628491  470831 ssh_runner.go:195] Run: crio config
	I1003 19:36:38.721955  470831 cni.go:84] Creating CNI manager for ""
	I1003 19:36:38.721980  470831 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 19:36:38.721998  470831 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1003 19:36:38.722029  470831 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-174543 NodeName:old-k8s-version-174543 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 19:36:38.722181  470831 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-174543"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
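
	The multi-document config above is what minikube renders for this profile: an InitConfiguration, a ClusterConfiguration, a KubeletConfiguration, and a KubeProxyConfiguration separated by "---", later written to /var/tmp/minikube/kubeadm.yaml.new. A minimal sketch of sanity-checking such a file (Go with gopkg.in/yaml.v3; the local filename is a placeholder, and this is an illustration rather than minikube's own validation):

// check_kubeadm_yaml.go - split a multi-document kubeadm config and confirm each
// document carries an apiVersion and kind.
package main

import (
	"fmt"
	"log"
	"os"
	"strings"

	"gopkg.in/yaml.v3"
)

func main() {
	data, err := os.ReadFile("kubeadm.yaml") // placeholder path
	if err != nil {
		log.Fatal(err)
	}
	for i, doc := range strings.Split(string(data), "\n---\n") {
		var m map[string]interface{}
		if err := yaml.Unmarshal([]byte(doc), &m); err != nil {
			log.Fatalf("document %d: %v", i, err)
		}
		if m["apiVersion"] == nil || m["kind"] == nil {
			log.Fatalf("document %d: missing apiVersion or kind", i)
		}
		fmt.Printf("doc %d: %v %v\n", i, m["apiVersion"], m["kind"])
	}
}
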
	
	I1003 19:36:38.722270  470831 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1003 19:36:38.734990  470831 binaries.go:44] Found k8s binaries, skipping transfer
	I1003 19:36:38.735069  470831 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1003 19:36:38.743828  470831 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1003 19:36:38.757632  470831 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 19:36:38.773219  470831 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1003 19:36:38.788811  470831 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1003 19:36:38.792770  470831 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
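
	The bash one-liner above keeps the control-plane.minikube.internal entry in /etc/hosts idempotent: it drops any existing line for that name, appends the current IP, and copies the result back over /etc/hosts. A rough equivalent in Go (standard library only; the temp path is a placeholder and the final privileged copy is left as a comment, so this is not minikube's actual implementation):

// ensure_hosts_entry.go - drop stale control-plane entries and append the current one.
package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const entry = "192.168.85.2\tcontrol-plane.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		log.Fatal(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Keep every line except an existing entry for the control-plane alias.
		if !strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	tmp := "/tmp/hosts.new" // placeholder; the logged one-liner uses /tmp/h.$$
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		log.Fatal(err)
	}
	// The last step would be a privileged copy of tmp over /etc/hosts,
	// which the one-liner above performs with `sudo cp`.
	log.Printf("staged updated hosts file at %s", tmp)
}
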
	I1003 19:36:38.807893  470831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 19:36:38.987564  470831 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 19:36:39.006441  470831 certs.go:69] Setting up /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/old-k8s-version-174543 for IP: 192.168.85.2
	I1003 19:36:39.006529  470831 certs.go:195] generating shared ca certs ...
	I1003 19:36:39.006560  470831 certs.go:227] acquiring lock for ca certs: {Name:mk5a10e6c921326e9c211447576eaeb893259ba7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:36:39.006788  470831 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21625-284583/.minikube/ca.key
	I1003 19:36:39.006870  470831 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.key
	I1003 19:36:39.006906  470831 certs.go:257] generating profile certs ...
	I1003 19:36:39.007047  470831 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/old-k8s-version-174543/client.key
	I1003 19:36:39.007163  470831 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/old-k8s-version-174543/apiserver.key.09eade1b
	I1003 19:36:39.007236  470831 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/old-k8s-version-174543/proxy-client.key
	I1003 19:36:39.007404  470831 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/286434.pem (1338 bytes)
	W1003 19:36:39.007468  470831 certs.go:480] ignoring /home/jenkins/minikube-integration/21625-284583/.minikube/certs/286434_empty.pem, impossibly tiny 0 bytes
	I1003 19:36:39.007494  470831 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 19:36:39.007563  470831 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem (1082 bytes)
	I1003 19:36:39.007612  470831 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem (1123 bytes)
	I1003 19:36:39.007665  470831 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/key.pem (1675 bytes)
	I1003 19:36:39.007744  470831 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem (1708 bytes)
	I1003 19:36:39.008444  470831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 19:36:39.070910  470831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1003 19:36:39.102477  470831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 19:36:39.131859  470831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1003 19:36:39.182220  470831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/old-k8s-version-174543/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1003 19:36:39.222848  470831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/old-k8s-version-174543/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1003 19:36:39.247686  470831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/old-k8s-version-174543/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 19:36:39.285222  470831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/old-k8s-version-174543/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1003 19:36:39.310065  470831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 19:36:39.341730  470831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/certs/286434.pem --> /usr/share/ca-certificates/286434.pem (1338 bytes)
	I1003 19:36:39.391536  470831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem --> /usr/share/ca-certificates/2864342.pem (1708 bytes)
	I1003 19:36:39.419719  470831 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1003 19:36:39.435425  470831 ssh_runner.go:195] Run: openssl version
	I1003 19:36:39.442930  470831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 19:36:39.453766  470831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 19:36:39.457959  470831 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  3 18:27 /usr/share/ca-certificates/minikubeCA.pem
	I1003 19:36:39.458064  470831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 19:36:39.503965  470831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1003 19:36:39.513478  470831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/286434.pem && ln -fs /usr/share/ca-certificates/286434.pem /etc/ssl/certs/286434.pem"
	I1003 19:36:39.521868  470831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/286434.pem
	I1003 19:36:39.526259  470831 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  3 18:34 /usr/share/ca-certificates/286434.pem
	I1003 19:36:39.526366  470831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/286434.pem
	I1003 19:36:39.576035  470831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/286434.pem /etc/ssl/certs/51391683.0"
	I1003 19:36:39.587037  470831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2864342.pem && ln -fs /usr/share/ca-certificates/2864342.pem /etc/ssl/certs/2864342.pem"
	I1003 19:36:39.596148  470831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2864342.pem
	I1003 19:36:39.600440  470831 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  3 18:34 /usr/share/ca-certificates/2864342.pem
	I1003 19:36:39.600506  470831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2864342.pem
	I1003 19:36:39.642070  470831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2864342.pem /etc/ssl/certs/3ec20f2e.0"
	I1003 19:36:39.650706  470831 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 19:36:39.654963  470831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1003 19:36:39.699817  470831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1003 19:36:39.741524  470831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1003 19:36:39.810137  470831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1003 19:36:39.867659  470831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1003 19:36:39.963823  470831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
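
	Each of the openssl x509 -noout -checkend 86400 runs above exits non-zero when the certificate in question would expire within the next 24 hours, so an existing control-plane cert is only reused if it still has at least a day of validity left. The equivalent check with Go's standard library (the cert path is a placeholder; illustrative only):

// cert_checkend.go - fail if a PEM certificate expires within 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate valid for at least another 24h")
}
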
	I1003 19:36:40.065488  470831 kubeadm.go:400] StartCluster: {Name:old-k8s-version-174543 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-174543 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 19:36:40.065602  470831 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1003 19:36:40.065684  470831 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1003 19:36:40.150314  470831 cri.go:89] found id: "9d777d7ca3f3aae2a67724d1a6f8ab7dbc9844b33527c107ab163508dd940d95"
	I1003 19:36:40.150342  470831 cri.go:89] found id: "62ef8d10feba1f56202dc665fa46660c227322fdddf49c3e984ffb9430f54164"
	I1003 19:36:40.150348  470831 cri.go:89] found id: ""
	I1003 19:36:40.150431  470831 ssh_runner.go:195] Run: sudo runc list -f json
	W1003 19:36:40.209366  470831 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-03T19:36:40Z" level=error msg="open /run/runc: no such file or directory"
	I1003 19:36:40.209465  470831 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 19:36:40.238212  470831 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1003 19:36:40.238235  470831 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1003 19:36:40.238287  470831 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1003 19:36:40.309274  470831 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1003 19:36:40.309771  470831 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-174543" does not appear in /home/jenkins/minikube-integration/21625-284583/kubeconfig
	I1003 19:36:40.309937  470831 kubeconfig.go:62] /home/jenkins/minikube-integration/21625-284583/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-174543" cluster setting kubeconfig missing "old-k8s-version-174543" context setting]
	I1003 19:36:40.310734  470831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/kubeconfig: {Name:mkc1323fd87f4a78231a26d2dab0dff7feecf1e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:36:40.317747  470831 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1003 19:36:40.341224  470831 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1003 19:36:40.341310  470831 kubeadm.go:601] duration metric: took 103.068172ms to restartPrimaryControlPlane
	I1003 19:36:40.341334  470831 kubeadm.go:402] duration metric: took 275.871441ms to StartCluster
	I1003 19:36:40.341373  470831 settings.go:142] acquiring lock: {Name:mkc95577dbc448e3409dfa2b5e53a3a1327cb451 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:36:40.341463  470831 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21625-284583/kubeconfig
	I1003 19:36:40.342096  470831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/kubeconfig: {Name:mkc1323fd87f4a78231a26d2dab0dff7feecf1e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:36:40.342580  470831 config.go:182] Loaded profile config "old-k8s-version-174543": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1003 19:36:40.342648  470831 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1003 19:36:40.342700  470831 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1003 19:36:40.342845  470831 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-174543"
	I1003 19:36:40.342859  470831 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-174543"
	W1003 19:36:40.342865  470831 addons.go:247] addon storage-provisioner should already be in state true
	I1003 19:36:40.342887  470831 host.go:66] Checking if "old-k8s-version-174543" exists ...
	I1003 19:36:40.343383  470831 cli_runner.go:164] Run: docker container inspect old-k8s-version-174543 --format={{.State.Status}}
	I1003 19:36:40.343941  470831 addons.go:69] Setting dashboard=true in profile "old-k8s-version-174543"
	I1003 19:36:40.343965  470831 addons.go:238] Setting addon dashboard=true in "old-k8s-version-174543"
	W1003 19:36:40.343972  470831 addons.go:247] addon dashboard should already be in state true
	I1003 19:36:40.343995  470831 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-174543"
	I1003 19:36:40.344029  470831 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-174543"
	I1003 19:36:40.344003  470831 host.go:66] Checking if "old-k8s-version-174543" exists ...
	I1003 19:36:40.344381  470831 cli_runner.go:164] Run: docker container inspect old-k8s-version-174543 --format={{.State.Status}}
	I1003 19:36:40.344524  470831 cli_runner.go:164] Run: docker container inspect old-k8s-version-174543 --format={{.State.Status}}
	I1003 19:36:40.355866  470831 out.go:179] * Verifying Kubernetes components...
	I1003 19:36:40.368882  470831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 19:36:40.393921  470831 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-174543"
	W1003 19:36:40.393943  470831 addons.go:247] addon default-storageclass should already be in state true
	I1003 19:36:40.393969  470831 host.go:66] Checking if "old-k8s-version-174543" exists ...
	I1003 19:36:40.394399  470831 cli_runner.go:164] Run: docker container inspect old-k8s-version-174543 --format={{.State.Status}}
	I1003 19:36:40.408117  470831 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1003 19:36:40.411103  470831 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1003 19:36:40.414544  470831 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1003 19:36:40.414581  470831 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1003 19:36:40.414658  470831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-174543
	I1003 19:36:40.416772  470831 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 19:36:39.907186  469677 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.312579852s)
	I1003 19:36:39.907232  469677 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1003 19:36:39.907321  469677 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1003 19:36:39.907451  469677 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (2.312938627s)
	I1003 19:36:39.907466  469677 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1003 19:36:39.907481  469677 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1003 19:36:39.907512  469677 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1003 19:36:42.321165  469677 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (2.413625674s)
	I1003 19:36:42.321196  469677 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1003 19:36:42.321217  469677 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1003 19:36:42.321273  469677 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1003 19:36:42.321343  469677 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.414004815s)
	I1003 19:36:42.321363  469677 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1003 19:36:42.321381  469677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
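
	The sequence above is the cached-image transfer pattern: a stat -c "%s %y" existence check on the target path, and only when that fails (as it does here for storage-provisioner_v5) is the tarball copied over from the local cache. A simplified, local-only sketch of the same check-then-copy logic (Go standard library; paths are placeholders, and the real stat and transfer run over SSH):

// copy_if_missing.go - copy a cached file to a destination only if it is absent
// or has a different size.
package main

import (
	"io"
	"log"
	"os"
	"path/filepath"
)

func main() {
	src := "cache/storage-provisioner_v5"       // placeholder cache path
	dst := "/tmp/images/storage-provisioner_v5" // placeholder destination

	srcInfo, err := os.Stat(src)
	if err != nil {
		log.Fatal(err)
	}
	// Skip the copy when the destination already exists with the expected size.
	if dstInfo, err := os.Stat(dst); err == nil && dstInfo.Size() == srcInfo.Size() {
		log.Println("destination present with matching size; skipping copy")
		return
	}
	if err := os.MkdirAll(filepath.Dir(dst), 0o755); err != nil {
		log.Fatal(err)
	}
	in, err := os.Open(src)
	if err != nil {
		log.Fatal(err)
	}
	defer in.Close()
	out, err := os.Create(dst)
	if err != nil {
		log.Fatal(err)
	}
	defer out.Close()
	if _, err := io.Copy(out, in); err != nil {
		log.Fatal(err)
	}
	log.Printf("copied %d bytes to %s", srcInfo.Size(), dst)
}
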
	I1003 19:36:44.503824  469677 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (2.182522238s)
	I1003 19:36:44.503853  469677 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1003 19:36:44.503873  469677 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1003 19:36:44.503931  469677 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1003 19:36:40.420887  470831 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 19:36:40.420912  470831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1003 19:36:40.420985  470831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-174543
	I1003 19:36:40.436447  470831 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1003 19:36:40.436474  470831 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1003 19:36:40.436538  470831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-174543
	I1003 19:36:40.468390  470831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/old-k8s-version-174543/id_rsa Username:docker}
	I1003 19:36:40.480958  470831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/old-k8s-version-174543/id_rsa Username:docker}
	I1003 19:36:40.491657  470831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/old-k8s-version-174543/id_rsa Username:docker}
	I1003 19:36:40.827254  470831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1003 19:36:40.871029  470831 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 19:36:40.871939  470831 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1003 19:36:40.871991  470831 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1003 19:36:40.905985  470831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 19:36:41.091414  470831 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1003 19:36:41.091481  470831 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1003 19:36:41.259108  470831 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1003 19:36:41.259190  470831 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1003 19:36:41.387179  470831 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1003 19:36:41.387248  470831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1003 19:36:41.463609  470831 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1003 19:36:41.463688  470831 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1003 19:36:41.521284  470831 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1003 19:36:41.521352  470831 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1003 19:36:41.571662  470831 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1003 19:36:41.571743  470831 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1003 19:36:41.606256  470831 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1003 19:36:41.606330  470831 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1003 19:36:41.633779  470831 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1003 19:36:41.633855  470831 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1003 19:36:41.682876  470831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1003 19:36:46.122072  469677 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.618115518s)
	I1003 19:36:46.122096  469677 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1003 19:36:46.122116  469677 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1003 19:36:46.122163  469677 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	I1003 19:36:50.486681  470831 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.659350106s)
	I1003 19:36:50.486868  470831 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (9.615770052s)
	I1003 19:36:50.486999  470831 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-174543" to be "Ready" ...
	I1003 19:36:50.563494  470831 node_ready.go:49] node "old-k8s-version-174543" is "Ready"
	I1003 19:36:50.563627  470831 node_ready.go:38] duration metric: took 76.592907ms for node "old-k8s-version-174543" to be "Ready" ...
	I1003 19:36:50.563657  470831 api_server.go:52] waiting for apiserver process to appear ...
	I1003 19:36:50.563753  470831 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 19:36:51.281166  470831 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.375104087s)
	I1003 19:36:52.074932  470831 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (10.391968891s)
	I1003 19:36:52.075163  470831 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.511377919s)
	I1003 19:36:52.075208  470831 api_server.go:72] duration metric: took 11.73243648s to wait for apiserver process to appear ...
	I1003 19:36:52.075222  470831 api_server.go:88] waiting for apiserver healthz status ...
	I1003 19:36:52.075241  470831 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1003 19:36:52.078448  470831 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-174543 addons enable metrics-server
	
	I1003 19:36:52.081625  470831 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1003 19:36:51.524837  469677 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (5.402654076s)
	I1003 19:36:51.524919  469677 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1003 19:36:51.524959  469677 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1003 19:36:51.525037  469677 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1003 19:36:52.294734  469677 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1003 19:36:52.294769  469677 cache_images.go:124] Successfully loaded all cached images
	I1003 19:36:52.294775  469677 cache_images.go:93] duration metric: took 18.863661907s to LoadCachedImages
	I1003 19:36:52.294786  469677 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1003 19:36:52.294879  469677 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-643397 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-643397 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1003 19:36:52.294960  469677 ssh_runner.go:195] Run: crio config
	I1003 19:36:52.364057  469677 cni.go:84] Creating CNI manager for ""
	I1003 19:36:52.364129  469677 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 19:36:52.364175  469677 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1003 19:36:52.364218  469677 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-643397 NodeName:no-preload-643397 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 19:36:52.364407  469677 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-643397"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1003 19:36:52.364517  469677 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1003 19:36:52.372571  469677 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1003 19:36:52.372685  469677 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1003 19:36:52.380593  469677 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
	I1003 19:36:52.380716  469677 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1003 19:36:52.380924  469677 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21625-284583/.minikube/cache/linux/arm64/v1.34.1/kubeadm
	I1003 19:36:52.381339  469677 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/21625-284583/.minikube/cache/linux/arm64/v1.34.1/kubelet
	I1003 19:36:52.386113  469677 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1003 19:36:52.386150  469677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/cache/linux/arm64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (58130616 bytes)
	I1003 19:36:53.545881  469677 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1003 19:36:53.549863  469677 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1003 19:36:53.549894  469677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/cache/linux/arm64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (71434424 bytes)
	I1003 19:36:53.709681  469677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 19:36:53.732427  469677 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1003 19:36:53.746177  469677 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1003 19:36:53.746216  469677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/cache/linux/arm64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (56426788 bytes)
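
	Because this is a no-preload profile, the v1.34.1 kubectl, kubeadm and kubelet binaries are not on the node yet: the sudo ls check fails, each binary is fetched from dl.k8s.io with a digest taken from the matching .sha256 URL, and the result is transferred into /var/lib/minikube/binaries/v1.34.1. A minimal sketch of a checksum-verified download like the ones logged above (Go standard library; the output filename is a placeholder, illustrative only):

// checksum_download.go - download a release binary and verify it against the
// published .sha256 digest.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
	"strings"
)

func main() {
	url := "https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm"

	// Fetch the published digest: the first field of the .sha256 file.
	sumResp, err := http.Get(url + ".sha256")
	if err != nil {
		log.Fatal(err)
	}
	defer sumResp.Body.Close()
	sumBytes, err := io.ReadAll(sumResp.Body)
	if err != nil {
		log.Fatal(err)
	}
	fields := strings.Fields(string(sumBytes))
	if len(fields) == 0 {
		log.Fatal("empty checksum file")
	}
	want := fields[0]

	// Stream the binary to disk while hashing it.
	resp, err := http.Get(url)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	out, err := os.Create("kubeadm") // placeholder output path
	if err != nil {
		log.Fatal(err)
	}
	defer out.Close()
	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		log.Fatal(err)
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		log.Fatalf("checksum mismatch: got %s, want %s", got, want)
	}
	fmt.Println("kubeadm downloaded and verified")
}
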
	I1003 19:36:54.331746  469677 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1003 19:36:54.343207  469677 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1003 19:36:54.358285  469677 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 19:36:54.373325  469677 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1003 19:36:54.388029  469677 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1003 19:36:54.393493  469677 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 19:36:54.406615  469677 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 19:36:54.534391  469677 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 19:36:54.563833  469677 certs.go:69] Setting up /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397 for IP: 192.168.76.2
	I1003 19:36:54.563855  469677 certs.go:195] generating shared ca certs ...
	I1003 19:36:54.563872  469677 certs.go:227] acquiring lock for ca certs: {Name:mk5a10e6c921326e9c211447576eaeb893259ba7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:36:54.564060  469677 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21625-284583/.minikube/ca.key
	I1003 19:36:54.564138  469677 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.key
	I1003 19:36:54.564177  469677 certs.go:257] generating profile certs ...
	I1003 19:36:54.564260  469677 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/client.key
	I1003 19:36:54.564282  469677 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/client.crt with IP's: []
	I1003 19:36:52.084106  470831 addons.go:514] duration metric: took 11.741369469s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1003 19:36:52.092617  470831 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1003 19:36:52.094379  470831 api_server.go:141] control plane version: v1.28.0
	I1003 19:36:52.094453  470831 api_server.go:131] duration metric: took 19.211581ms to wait for apiserver health ...
	I1003 19:36:52.094475  470831 system_pods.go:43] waiting for kube-system pods to appear ...
	I1003 19:36:52.104999  470831 system_pods.go:59] 8 kube-system pods found
	I1003 19:36:52.105093  470831 system_pods.go:61] "coredns-5dd5756b68-6grkm" [678e0c98-f42a-4a69-8d50-a83a82886a69] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1003 19:36:52.105116  470831 system_pods.go:61] "etcd-old-k8s-version-174543" [8550f5a6-a2dc-4e9b-b623-9d0d9dfd66fd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1003 19:36:52.105151  470831 system_pods.go:61] "kindnet-rwdd6" [3cc7fea5-9441-4250-80b2-05aff82ce727] Running
	I1003 19:36:52.105178  470831 system_pods.go:61] "kube-apiserver-old-k8s-version-174543" [b8ce8574-fafd-4466-b9b8-b12c3ae221b7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1003 19:36:52.105201  470831 system_pods.go:61] "kube-controller-manager-old-k8s-version-174543" [aea29031-128c-4683-b165-ef6f11b79e72] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1003 19:36:52.105235  470831 system_pods.go:61] "kube-proxy-v4mqk" [50d549bb-e122-45af-8dad-b599f07053fd] Running
	I1003 19:36:52.105261  470831 system_pods.go:61] "kube-scheduler-old-k8s-version-174543" [3b73907b-8446-4189-9d96-e02a6c332aa6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1003 19:36:52.105279  470831 system_pods.go:61] "storage-provisioner" [8db23fd8-6872-4901-b61f-a88ac26407a7] Running
	I1003 19:36:52.105314  470831 system_pods.go:74] duration metric: took 10.804885ms to wait for pod list to return data ...
	I1003 19:36:52.105341  470831 default_sa.go:34] waiting for default service account to be created ...
	I1003 19:36:52.109408  470831 default_sa.go:45] found service account: "default"
	I1003 19:36:52.109473  470831 default_sa.go:55] duration metric: took 4.111364ms for default service account to be created ...
	I1003 19:36:52.109507  470831 system_pods.go:116] waiting for k8s-apps to be running ...
	I1003 19:36:52.113674  470831 system_pods.go:86] 8 kube-system pods found
	I1003 19:36:52.113760  470831 system_pods.go:89] "coredns-5dd5756b68-6grkm" [678e0c98-f42a-4a69-8d50-a83a82886a69] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1003 19:36:52.113785  470831 system_pods.go:89] "etcd-old-k8s-version-174543" [8550f5a6-a2dc-4e9b-b623-9d0d9dfd66fd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1003 19:36:52.113822  470831 system_pods.go:89] "kindnet-rwdd6" [3cc7fea5-9441-4250-80b2-05aff82ce727] Running
	I1003 19:36:52.113847  470831 system_pods.go:89] "kube-apiserver-old-k8s-version-174543" [b8ce8574-fafd-4466-b9b8-b12c3ae221b7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1003 19:36:52.113871  470831 system_pods.go:89] "kube-controller-manager-old-k8s-version-174543" [aea29031-128c-4683-b165-ef6f11b79e72] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1003 19:36:52.113906  470831 system_pods.go:89] "kube-proxy-v4mqk" [50d549bb-e122-45af-8dad-b599f07053fd] Running
	I1003 19:36:52.113933  470831 system_pods.go:89] "kube-scheduler-old-k8s-version-174543" [3b73907b-8446-4189-9d96-e02a6c332aa6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1003 19:36:52.113953  470831 system_pods.go:89] "storage-provisioner" [8db23fd8-6872-4901-b61f-a88ac26407a7] Running
	I1003 19:36:52.113990  470831 system_pods.go:126] duration metric: took 4.462457ms to wait for k8s-apps to be running ...
	I1003 19:36:52.114017  470831 system_svc.go:44] waiting for kubelet service to be running ....
	I1003 19:36:52.114104  470831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 19:36:52.129798  470831 system_svc.go:56] duration metric: took 15.772795ms WaitForService to wait for kubelet
	I1003 19:36:52.129872  470831 kubeadm.go:586] duration metric: took 11.787098529s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 19:36:52.129906  470831 node_conditions.go:102] verifying NodePressure condition ...
	I1003 19:36:52.133219  470831 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1003 19:36:52.133315  470831 node_conditions.go:123] node cpu capacity is 2
	I1003 19:36:52.133345  470831 node_conditions.go:105] duration metric: took 3.421679ms to run NodePressure ...
	I1003 19:36:52.133386  470831 start.go:241] waiting for startup goroutines ...
	I1003 19:36:52.133413  470831 start.go:246] waiting for cluster config update ...
	I1003 19:36:52.133439  470831 start.go:255] writing updated cluster config ...
	I1003 19:36:52.133757  470831 ssh_runner.go:195] Run: rm -f paused
	I1003 19:36:52.138185  470831 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1003 19:36:52.143212  470831 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-6grkm" in "kube-system" namespace to be "Ready" or be gone ...
	W1003 19:36:54.151250  470831 pod_ready.go:104] pod "coredns-5dd5756b68-6grkm" is not "Ready", error: <nil>
	I1003 19:36:54.723061  469677 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/client.crt ...
	I1003 19:36:54.723102  469677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/client.crt: {Name:mkea5bfb95d8fdb117792960e5221a8bc9115b50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:36:54.723346  469677 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/client.key ...
	I1003 19:36:54.723364  469677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/client.key: {Name:mkf4738ba9e553f9f9be1784d2e0f6c375d691df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:36:54.723521  469677 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/apiserver.key.ee2e84a9
	I1003 19:36:54.723538  469677 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/apiserver.crt.ee2e84a9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1003 19:36:55.207794  469677 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/apiserver.crt.ee2e84a9 ...
	I1003 19:36:55.207868  469677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/apiserver.crt.ee2e84a9: {Name:mk19ce55b7f476d867b58a46a648e11db58f5a77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:36:55.208085  469677 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/apiserver.key.ee2e84a9 ...
	I1003 19:36:55.208125  469677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/apiserver.key.ee2e84a9: {Name:mkc44185d4065ec27cc61b06ce0bc9de1613954b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:36:55.208247  469677 certs.go:382] copying /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/apiserver.crt.ee2e84a9 -> /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/apiserver.crt
	I1003 19:36:55.208353  469677 certs.go:386] copying /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/apiserver.key.ee2e84a9 -> /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/apiserver.key
	I1003 19:36:55.208436  469677 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/proxy-client.key
	I1003 19:36:55.208469  469677 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/proxy-client.crt with IP's: []
	I1003 19:36:56.304461  469677 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/proxy-client.crt ...
	I1003 19:36:56.304494  469677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/proxy-client.crt: {Name:mkb08c6c1be2a70b1e5ff3f6ddde2e4e9c47ee6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:36:56.304684  469677 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/proxy-client.key ...
	I1003 19:36:56.304701  469677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/proxy-client.key: {Name:mk1a2d478a1729a17beec4d720ca7883e92f1491 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
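
	Unlike the old-k8s-version profile above, which reused still-valid profile certs, this no-preload profile generates them from scratch: a client cert for "minikube-user", an apiserver cert whose SANs cover 10.96.0.1, 127.0.0.1, 10.0.0.1 and the node IP 192.168.76.2, and an aggregator proxy-client cert, all signed by the shared minikubeCA. A condensed sketch of issuing a certificate with IP SANs (Go standard library; a throwaway CA is generated here instead of loading minikube's ca.key, so this is illustrative only):

// profile_cert_sketch.go - issue a leaf certificate with IP SANs signed by a CA.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA; minikube reuses .minikube/ca.crt and ca.key instead.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		log.Fatal(err)
	}

	// Leaf certificate with the IP SANs seen in the log for apiserver.crt.
	leafKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2"),
		},
	}
	leafDER, err := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
}
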
	I1003 19:36:56.304906  469677 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/286434.pem (1338 bytes)
	W1003 19:36:56.304950  469677 certs.go:480] ignoring /home/jenkins/minikube-integration/21625-284583/.minikube/certs/286434_empty.pem, impossibly tiny 0 bytes
	I1003 19:36:56.304965  469677 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 19:36:56.304990  469677 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem (1082 bytes)
	I1003 19:36:56.305016  469677 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem (1123 bytes)
	I1003 19:36:56.305042  469677 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/key.pem (1675 bytes)
	I1003 19:36:56.305090  469677 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem (1708 bytes)
	I1003 19:36:56.305635  469677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 19:36:56.325874  469677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1003 19:36:56.344837  469677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 19:36:56.363293  469677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1003 19:36:56.381085  469677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1003 19:36:56.400919  469677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1003 19:36:56.419228  469677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 19:36:56.438028  469677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1003 19:36:56.455936  469677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem --> /usr/share/ca-certificates/2864342.pem (1708 bytes)
	I1003 19:36:56.474212  469677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 19:36:56.491955  469677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/certs/286434.pem --> /usr/share/ca-certificates/286434.pem (1338 bytes)
	I1003 19:36:56.510065  469677 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1003 19:36:56.524259  469677 ssh_runner.go:195] Run: openssl version
	I1003 19:36:56.534016  469677 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/286434.pem && ln -fs /usr/share/ca-certificates/286434.pem /etc/ssl/certs/286434.pem"
	I1003 19:36:56.543214  469677 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/286434.pem
	I1003 19:36:56.547972  469677 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  3 18:34 /usr/share/ca-certificates/286434.pem
	I1003 19:36:56.548066  469677 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/286434.pem
	I1003 19:36:56.591319  469677 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/286434.pem /etc/ssl/certs/51391683.0"
	I1003 19:36:56.600012  469677 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2864342.pem && ln -fs /usr/share/ca-certificates/2864342.pem /etc/ssl/certs/2864342.pem"
	I1003 19:36:56.608753  469677 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2864342.pem
	I1003 19:36:56.612596  469677 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  3 18:34 /usr/share/ca-certificates/2864342.pem
	I1003 19:36:56.612712  469677 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2864342.pem
	I1003 19:36:56.654061  469677 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2864342.pem /etc/ssl/certs/3ec20f2e.0"
	I1003 19:36:56.662615  469677 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 19:36:56.672208  469677 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 19:36:56.676572  469677 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  3 18:27 /usr/share/ca-certificates/minikubeCA.pem
	I1003 19:36:56.676683  469677 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 19:36:56.717711  469677 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
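The openssl x509 -hash / ln -fs pairs above implement the standard OpenSSL subject-hash convention: each CA placed under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named <subject-hash>.0, which is how OpenSSL-linked clients discover trusted CAs. A minimal sketch of that convention in Go (illustrative only, not minikube's certs.go; assumes openssl is on PATH and the paths are placeholders):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// trustCert links certPath into certsDir under its OpenSSL subject-hash
// name (<hash>.0), mirroring the ln -fs commands in the log above.
func trustCert(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // behave like ln -fs: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	// Placeholder path for illustration.
	if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}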
	I1003 19:36:56.729797  469677 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 19:36:56.737585  469677 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1003 19:36:56.737637  469677 kubeadm.go:400] StartCluster: {Name:no-preload-643397 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-643397 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 19:36:56.737710  469677 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1003 19:36:56.737768  469677 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1003 19:36:56.780132  469677 cri.go:89] found id: ""
	I1003 19:36:56.780210  469677 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 19:36:56.789811  469677 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1003 19:36:56.797624  469677 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1003 19:36:56.797736  469677 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 19:36:56.805674  469677 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1003 19:36:56.805698  469677 kubeadm.go:157] found existing configuration files:
	
	I1003 19:36:56.805776  469677 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1003 19:36:56.814539  469677 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1003 19:36:56.814648  469677 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1003 19:36:56.822346  469677 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1003 19:36:56.829610  469677 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1003 19:36:56.829675  469677 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1003 19:36:56.836933  469677 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1003 19:36:56.852916  469677 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1003 19:36:56.852987  469677 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1003 19:36:56.863551  469677 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1003 19:36:56.873992  469677 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1003 19:36:56.874054  469677 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1003 19:36:56.882629  469677 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1003 19:36:56.923304  469677 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1003 19:36:56.923637  469677 kubeadm.go:318] [preflight] Running pre-flight checks
	I1003 19:36:56.956544  469677 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1003 19:36:56.956622  469677 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1003 19:36:56.956664  469677 kubeadm.go:318] OS: Linux
	I1003 19:36:56.956718  469677 kubeadm.go:318] CGROUPS_CPU: enabled
	I1003 19:36:56.956801  469677 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1003 19:36:56.956857  469677 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1003 19:36:56.956912  469677 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1003 19:36:56.956970  469677 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1003 19:36:56.957025  469677 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1003 19:36:56.957075  469677 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1003 19:36:56.957129  469677 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1003 19:36:56.957182  469677 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1003 19:36:57.030788  469677 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1003 19:36:57.030916  469677 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1003 19:36:57.031019  469677 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1003 19:36:57.050939  469677 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1003 19:36:57.055510  469677 out.go:252]   - Generating certificates and keys ...
	I1003 19:36:57.055689  469677 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1003 19:36:57.055808  469677 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1003 19:36:57.836445  469677 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1003 19:36:57.912322  469677 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1003 19:36:58.196922  469677 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1003 19:36:58.587327  469677 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1003 19:36:58.751249  469677 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1003 19:36:58.751615  469677 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-643397] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1003 19:36:58.838899  469677 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1003 19:36:58.839218  469677 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-643397] and IPs [192.168.76.2 127.0.0.1 ::1]
	W1003 19:36:56.152283  470831 pod_ready.go:104] pod "coredns-5dd5756b68-6grkm" is not "Ready", error: <nil>
	W1003 19:36:58.650953  470831 pod_ready.go:104] pod "coredns-5dd5756b68-6grkm" is not "Ready", error: <nil>
	I1003 19:36:59.776416  469677 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1003 19:37:00.060836  469677 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1003 19:37:00.317856  469677 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1003 19:37:00.318288  469677 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1003 19:37:00.476997  469677 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1003 19:37:00.676428  469677 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1003 19:37:00.863403  469677 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1003 19:37:01.550407  469677 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1003 19:37:02.648554  469677 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1003 19:37:02.648666  469677 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1003 19:37:02.648780  469677 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1003 19:37:02.652441  469677 out.go:252]   - Booting up control plane ...
	I1003 19:37:02.652564  469677 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1003 19:37:02.652647  469677 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1003 19:37:02.652719  469677 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1003 19:37:02.670695  469677 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1003 19:37:02.670820  469677 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1003 19:37:02.682650  469677 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1003 19:37:02.682776  469677 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1003 19:37:02.682820  469677 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1003 19:37:02.856554  469677 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1003 19:37:02.856720  469677 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1003 19:37:03.858878  469677 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.002481179s
	I1003 19:37:03.862941  469677 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1003 19:37:03.863050  469677 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1003 19:37:03.863150  469677 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1003 19:37:03.863894  469677 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1003 19:37:00.658147  470831 pod_ready.go:104] pod "coredns-5dd5756b68-6grkm" is not "Ready", error: <nil>
	W1003 19:37:03.151308  470831 pod_ready.go:104] pod "coredns-5dd5756b68-6grkm" is not "Ready", error: <nil>
	I1003 19:37:08.071258  469677 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 4.207141162s
	W1003 19:37:05.651702  470831 pod_ready.go:104] pod "coredns-5dd5756b68-6grkm" is not "Ready", error: <nil>
	W1003 19:37:07.652884  470831 pod_ready.go:104] pod "coredns-5dd5756b68-6grkm" is not "Ready", error: <nil>
	W1003 19:37:09.653756  470831 pod_ready.go:104] pod "coredns-5dd5756b68-6grkm" is not "Ready", error: <nil>
	I1003 19:37:10.649991  469677 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 6.781485326s
	I1003 19:37:12.866223  469677 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 9.002847252s
	I1003 19:37:12.888325  469677 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1003 19:37:12.909020  469677 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1003 19:37:12.954407  469677 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1003 19:37:12.954615  469677 kubeadm.go:318] [mark-control-plane] Marking the node no-preload-643397 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1003 19:37:12.978776  469677 kubeadm.go:318] [bootstrap-token] Using token: dz2q20.oxlpcyn3z86knmhs
	I1003 19:37:12.981972  469677 out.go:252]   - Configuring RBAC rules ...
	I1003 19:37:12.982125  469677 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1003 19:37:13.013673  469677 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1003 19:37:13.047764  469677 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1003 19:37:13.065884  469677 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1003 19:37:13.070997  469677 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1003 19:37:13.076272  469677 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1003 19:37:13.273866  469677 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1003 19:37:13.818579  469677 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1003 19:37:14.284423  469677 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1003 19:37:14.285888  469677 kubeadm.go:318] 
	I1003 19:37:14.285967  469677 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1003 19:37:14.285973  469677 kubeadm.go:318] 
	I1003 19:37:14.286054  469677 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1003 19:37:14.286060  469677 kubeadm.go:318] 
	I1003 19:37:14.286087  469677 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1003 19:37:14.286473  469677 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1003 19:37:14.286531  469677 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1003 19:37:14.286537  469677 kubeadm.go:318] 
	I1003 19:37:14.286593  469677 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1003 19:37:14.286598  469677 kubeadm.go:318] 
	I1003 19:37:14.286651  469677 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1003 19:37:14.286656  469677 kubeadm.go:318] 
	I1003 19:37:14.286711  469677 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1003 19:37:14.286789  469677 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1003 19:37:14.286872  469677 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1003 19:37:14.286883  469677 kubeadm.go:318] 
	I1003 19:37:14.287175  469677 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1003 19:37:14.287279  469677 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1003 19:37:14.287285  469677 kubeadm.go:318] 
	I1003 19:37:14.287544  469677 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token dz2q20.oxlpcyn3z86knmhs \
	I1003 19:37:14.287665  469677 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:f66ff31263aa4cda6b17caa2076838d6a1918275f1c2773b90b119c0d4a4d71a \
	I1003 19:37:14.287847  469677 kubeadm.go:318] 	--control-plane 
	I1003 19:37:14.287875  469677 kubeadm.go:318] 
	I1003 19:37:14.288110  469677 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1003 19:37:14.288128  469677 kubeadm.go:318] 
	I1003 19:37:14.288393  469677 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token dz2q20.oxlpcyn3z86knmhs \
	I1003 19:37:14.288650  469677 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:f66ff31263aa4cda6b17caa2076838d6a1918275f1c2773b90b119c0d4a4d71a 
	I1003 19:37:14.293244  469677 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1003 19:37:14.293485  469677 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1003 19:37:14.293601  469677 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1003 19:37:14.293622  469677 cni.go:84] Creating CNI manager for ""
	I1003 19:37:14.293634  469677 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 19:37:14.299735  469677 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1003 19:37:14.303086  469677 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1003 19:37:14.309906  469677 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1003 19:37:14.309930  469677 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1003 19:37:14.336322  469677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	W1003 19:37:11.655175  470831 pod_ready.go:104] pod "coredns-5dd5756b68-6grkm" is not "Ready", error: <nil>
	W1003 19:37:13.657155  470831 pod_ready.go:104] pod "coredns-5dd5756b68-6grkm" is not "Ready", error: <nil>
	I1003 19:37:14.811333  469677 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1003 19:37:14.811471  469677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 19:37:14.811560  469677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-643397 minikube.k8s.io/updated_at=2025_10_03T19_37_14_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=a43873c79fc22f8b1ccd29d3dfa635d392b09335 minikube.k8s.io/name=no-preload-643397 minikube.k8s.io/primary=true
	I1003 19:37:15.177419  469677 ops.go:34] apiserver oom_adj: -16
	I1003 19:37:15.177535  469677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 19:37:15.678053  469677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 19:37:16.177675  469677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 19:37:16.678465  469677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 19:37:17.177605  469677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 19:37:17.678441  469677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 19:37:18.177833  469677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 19:37:18.678473  469677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 19:37:19.177998  469677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 19:37:19.303395  469677 kubeadm.go:1113] duration metric: took 4.491974475s to wait for elevateKubeSystemPrivileges
	I1003 19:37:19.303422  469677 kubeadm.go:402] duration metric: took 22.565789399s to StartCluster
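The repeated "kubectl get sa default" invocations above are a readiness poll: StartCluster is not considered finished until the default ServiceAccount exists, at which point the minikube-rbac ClusterRoleBinding created earlier can take effect. A rough client-go equivalent of that wait (a sketch that assumes a clientset already built from the kubeconfig; not the actual elevateKubeSystemPrivileges implementation):

package sketch

import (
	"context"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForDefaultSA polls until the "default" ServiceAccount exists in the
// default namespace, mirroring the repeated `kubectl get sa default` calls.
func waitForDefaultSA(ctx context.Context, cs kubernetes.Interface) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 2*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			_, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
			if apierrors.IsNotFound(err) {
				return false, nil // not created yet; keep polling
			}
			return err == nil, err
		})
}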
	I1003 19:37:19.303440  469677 settings.go:142] acquiring lock: {Name:mkc95577dbc448e3409dfa2b5e53a3a1327cb451 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:37:19.303498  469677 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21625-284583/kubeconfig
	I1003 19:37:19.304437  469677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/kubeconfig: {Name:mkc1323fd87f4a78231a26d2dab0dff7feecf1e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:37:19.304655  469677 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1003 19:37:19.304785  469677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1003 19:37:19.305028  469677 config.go:182] Loaded profile config "no-preload-643397": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 19:37:19.305059  469677 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1003 19:37:19.305117  469677 addons.go:69] Setting storage-provisioner=true in profile "no-preload-643397"
	I1003 19:37:19.305134  469677 addons.go:238] Setting addon storage-provisioner=true in "no-preload-643397"
	I1003 19:37:19.305155  469677 host.go:66] Checking if "no-preload-643397" exists ...
	I1003 19:37:19.305706  469677 addons.go:69] Setting default-storageclass=true in profile "no-preload-643397"
	I1003 19:37:19.305744  469677 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-643397"
	I1003 19:37:19.306024  469677 cli_runner.go:164] Run: docker container inspect no-preload-643397 --format={{.State.Status}}
	I1003 19:37:19.306036  469677 cli_runner.go:164] Run: docker container inspect no-preload-643397 --format={{.State.Status}}
	I1003 19:37:19.309052  469677 out.go:179] * Verifying Kubernetes components...
	I1003 19:37:19.315256  469677 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 19:37:19.344959  469677 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 19:37:19.350292  469677 addons.go:238] Setting addon default-storageclass=true in "no-preload-643397"
	I1003 19:37:19.350335  469677 host.go:66] Checking if "no-preload-643397" exists ...
	I1003 19:37:19.350745  469677 cli_runner.go:164] Run: docker container inspect no-preload-643397 --format={{.State.Status}}
	I1003 19:37:19.350945  469677 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 19:37:19.350970  469677 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1003 19:37:19.351010  469677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-643397
	I1003 19:37:19.400750  469677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/no-preload-643397/id_rsa Username:docker}
	I1003 19:37:19.407421  469677 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1003 19:37:19.407447  469677 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1003 19:37:19.407509  469677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-643397
	I1003 19:37:19.433989  469677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/no-preload-643397/id_rsa Username:docker}
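The sshutil "new ssh client" entries above show the addon manifests being copied over an SSH connection to the node's forwarded port (127.0.0.1:33423) using the per-profile id_rsa key; the surrounding ssh_runner "Run:" lines execute over the same kind of session. A rough sketch of that pattern with golang.org/x/crypto/ssh (a simplified illustration, not minikube's sshutil or ssh_runner code):

package sketch

import (
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// runOverSSH dials addr with the given private key and runs one command,
// returning combined stdout/stderr.
func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
	keyPEM, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(keyPEM)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a throwaway test node
		Timeout:         10 * time.Second,
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}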
	W1003 19:37:16.149238  470831 pod_ready.go:104] pod "coredns-5dd5756b68-6grkm" is not "Ready", error: <nil>
	W1003 19:37:18.649271  470831 pod_ready.go:104] pod "coredns-5dd5756b68-6grkm" is not "Ready", error: <nil>
	I1003 19:37:19.715486  469677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1003 19:37:19.715593  469677 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 19:37:19.772102  469677 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1003 19:37:19.820338  469677 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 19:37:20.371803  469677 node_ready.go:35] waiting up to 6m0s for node "no-preload-643397" to be "Ready" ...
	I1003 19:37:20.371912  469677 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1003 19:37:20.880944  469677 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-643397" context rescaled to 1 replicas
	I1003 19:37:20.986839  469677 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.166463205s)
	I1003 19:37:20.990124  469677 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1003 19:37:20.993057  469677 addons.go:514] duration metric: took 1.687963193s for enable addons: enabled=[default-storageclass storage-provisioner]
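The long sed pipeline a few lines above rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host gateway (192.168.76.1 here) from inside the cluster, and turns on query logging. Assuming a stock Corefile layout, the injected change amounts to a "log" directive inserted above "errors" and the following block inserted directly above the "forward . /etc/resolv.conf" line (reconstructed from the sed expressions, not read back from the cluster):

        hosts {
           192.168.76.1 host.minikube.internal
           fallthrough
        }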
	W1003 19:37:22.376123  469677 node_ready.go:57] node "no-preload-643397" has "Ready":"False" status (will retry)
	W1003 19:37:20.649460  470831 pod_ready.go:104] pod "coredns-5dd5756b68-6grkm" is not "Ready", error: <nil>
	W1003 19:37:22.650326  470831 pod_ready.go:104] pod "coredns-5dd5756b68-6grkm" is not "Ready", error: <nil>
	W1003 19:37:25.150069  470831 pod_ready.go:104] pod "coredns-5dd5756b68-6grkm" is not "Ready", error: <nil>
	W1003 19:37:24.875623  469677 node_ready.go:57] node "no-preload-643397" has "Ready":"False" status (will retry)
	W1003 19:37:26.875771  469677 node_ready.go:57] node "no-preload-643397" has "Ready":"False" status (will retry)
	W1003 19:37:29.375746  469677 node_ready.go:57] node "no-preload-643397" has "Ready":"False" status (will retry)
	W1003 19:37:27.150205  470831 pod_ready.go:104] pod "coredns-5dd5756b68-6grkm" is not "Ready", error: <nil>
	I1003 19:37:28.649438  470831 pod_ready.go:94] pod "coredns-5dd5756b68-6grkm" is "Ready"
	I1003 19:37:28.649469  470831 pod_ready.go:86] duration metric: took 36.506186575s for pod "coredns-5dd5756b68-6grkm" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:37:28.652598  470831 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-174543" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:37:28.658917  470831 pod_ready.go:94] pod "etcd-old-k8s-version-174543" is "Ready"
	I1003 19:37:28.658946  470831 pod_ready.go:86] duration metric: took 6.321554ms for pod "etcd-old-k8s-version-174543" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:37:28.662163  470831 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-174543" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:37:28.668091  470831 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-174543" is "Ready"
	I1003 19:37:28.668117  470831 pod_ready.go:86] duration metric: took 5.928958ms for pod "kube-apiserver-old-k8s-version-174543" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:37:28.671688  470831 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-174543" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:37:28.846760  470831 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-174543" is "Ready"
	I1003 19:37:28.846792  470831 pod_ready.go:86] duration metric: took 175.076433ms for pod "kube-controller-manager-old-k8s-version-174543" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:37:29.047756  470831 pod_ready.go:83] waiting for pod "kube-proxy-v4mqk" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:37:29.448122  470831 pod_ready.go:94] pod "kube-proxy-v4mqk" is "Ready"
	I1003 19:37:29.448147  470831 pod_ready.go:86] duration metric: took 400.307649ms for pod "kube-proxy-v4mqk" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:37:29.647912  470831 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-174543" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:37:30.050088  470831 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-174543" is "Ready"
	I1003 19:37:30.050180  470831 pod_ready.go:86] duration metric: took 402.239657ms for pod "kube-scheduler-old-k8s-version-174543" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:37:30.050210  470831 pod_ready.go:40] duration metric: took 37.911945126s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1003 19:37:30.129993  470831 start.go:623] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1003 19:37:30.133282  470831 out.go:203] 
	W1003 19:37:30.136402  470831 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1003 19:37:30.139579  470831 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1003 19:37:30.142604  470831 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-174543" cluster and "default" namespace by default
	W1003 19:37:31.376152  469677 node_ready.go:57] node "no-preload-643397" has "Ready":"False" status (will retry)
	I1003 19:37:33.877493  469677 node_ready.go:49] node "no-preload-643397" is "Ready"
	I1003 19:37:33.877520  469677 node_ready.go:38] duration metric: took 13.504811463s for node "no-preload-643397" to be "Ready" ...
	I1003 19:37:33.877534  469677 api_server.go:52] waiting for apiserver process to appear ...
	I1003 19:37:33.877594  469677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 19:37:33.895506  469677 api_server.go:72] duration metric: took 14.590822912s to wait for apiserver process to appear ...
	I1003 19:37:33.895531  469677 api_server.go:88] waiting for apiserver healthz status ...
	I1003 19:37:33.895550  469677 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1003 19:37:33.909806  469677 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1003 19:37:33.910971  469677 api_server.go:141] control plane version: v1.34.1
	I1003 19:37:33.911000  469677 api_server.go:131] duration metric: took 15.46149ms to wait for apiserver health ...
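The healthz wait above is a plain HTTPS GET against the apiserver's /healthz endpoint, treated as healthy once it returns 200 with body "ok". A minimal sketch of that probe (illustrative only; it skips certificate verification for brevity, which the real check does not need to do because it trusts the cluster CA):

package sketch

import (
	"crypto/tls"
	"io"
	"net/http"
	"strings"
	"time"
)

// apiserverHealthy probes https://<addr>/healthz and reports whether the
// endpoint answered 200 "ok", mirroring the api_server.go check above.
func apiserverHealthy(addr string) (bool, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// TLS verification is skipped purely for illustration.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://" + addr + "/healthz")
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return resp.StatusCode == http.StatusOK && strings.TrimSpace(string(body)) == "ok", nil
}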
	I1003 19:37:33.911010  469677 system_pods.go:43] waiting for kube-system pods to appear ...
	I1003 19:37:33.916639  469677 system_pods.go:59] 8 kube-system pods found
	I1003 19:37:33.916673  469677 system_pods.go:61] "coredns-66bc5c9577-h8n5p" [d7f4ec9d-9c68-4332-b6c7-e52f424dcd1e] Pending
	I1003 19:37:33.916680  469677 system_pods.go:61] "etcd-no-preload-643397" [642f5548-1caf-4bb4-9780-63e00e8b0a3c] Running
	I1003 19:37:33.916685  469677 system_pods.go:61] "kindnet-7zwct" [bd0ecfeb-3764-425f-b7ae-e6f5b3e161d8] Running
	I1003 19:37:33.916689  469677 system_pods.go:61] "kube-apiserver-no-preload-643397" [6e4aa6fd-218d-45ce-a0d9-a1736936d2d3] Running
	I1003 19:37:33.916694  469677 system_pods.go:61] "kube-controller-manager-no-preload-643397" [29843b74-a1d2-46af-ac5e-06f4d53a0ac4] Running
	I1003 19:37:33.916698  469677 system_pods.go:61] "kube-proxy-lcs2q" [f25c0891-1202-477f-9cc9-5e41c3f1b9fb] Running
	I1003 19:37:33.916702  469677 system_pods.go:61] "kube-scheduler-no-preload-643397" [6865d4a0-3590-465e-81e1-927d271170c0] Running
	I1003 19:37:33.916710  469677 system_pods.go:61] "storage-provisioner" [355c16e4-3158-4ffc-9379-57747ed71cca] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1003 19:37:33.916717  469677 system_pods.go:74] duration metric: took 5.701435ms to wait for pod list to return data ...
	I1003 19:37:33.916791  469677 default_sa.go:34] waiting for default service account to be created ...
	I1003 19:37:33.929062  469677 default_sa.go:45] found service account: "default"
	I1003 19:37:33.929096  469677 default_sa.go:55] duration metric: took 12.295124ms for default service account to be created ...
	I1003 19:37:33.929107  469677 system_pods.go:116] waiting for k8s-apps to be running ...
	I1003 19:37:33.935443  469677 system_pods.go:86] 8 kube-system pods found
	I1003 19:37:33.935482  469677 system_pods.go:89] "coredns-66bc5c9577-h8n5p" [d7f4ec9d-9c68-4332-b6c7-e52f424dcd1e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1003 19:37:33.935488  469677 system_pods.go:89] "etcd-no-preload-643397" [642f5548-1caf-4bb4-9780-63e00e8b0a3c] Running
	I1003 19:37:33.935536  469677 system_pods.go:89] "kindnet-7zwct" [bd0ecfeb-3764-425f-b7ae-e6f5b3e161d8] Running
	I1003 19:37:33.935550  469677 system_pods.go:89] "kube-apiserver-no-preload-643397" [6e4aa6fd-218d-45ce-a0d9-a1736936d2d3] Running
	I1003 19:37:33.935556  469677 system_pods.go:89] "kube-controller-manager-no-preload-643397" [29843b74-a1d2-46af-ac5e-06f4d53a0ac4] Running
	I1003 19:37:33.935561  469677 system_pods.go:89] "kube-proxy-lcs2q" [f25c0891-1202-477f-9cc9-5e41c3f1b9fb] Running
	I1003 19:37:33.935566  469677 system_pods.go:89] "kube-scheduler-no-preload-643397" [6865d4a0-3590-465e-81e1-927d271170c0] Running
	I1003 19:37:33.935579  469677 system_pods.go:89] "storage-provisioner" [355c16e4-3158-4ffc-9379-57747ed71cca] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1003 19:37:33.935626  469677 retry.go:31] will retry after 295.140191ms: missing components: kube-dns
	I1003 19:37:34.235258  469677 system_pods.go:86] 8 kube-system pods found
	I1003 19:37:34.235294  469677 system_pods.go:89] "coredns-66bc5c9577-h8n5p" [d7f4ec9d-9c68-4332-b6c7-e52f424dcd1e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1003 19:37:34.235302  469677 system_pods.go:89] "etcd-no-preload-643397" [642f5548-1caf-4bb4-9780-63e00e8b0a3c] Running
	I1003 19:37:34.235309  469677 system_pods.go:89] "kindnet-7zwct" [bd0ecfeb-3764-425f-b7ae-e6f5b3e161d8] Running
	I1003 19:37:34.235339  469677 system_pods.go:89] "kube-apiserver-no-preload-643397" [6e4aa6fd-218d-45ce-a0d9-a1736936d2d3] Running
	I1003 19:37:34.235353  469677 system_pods.go:89] "kube-controller-manager-no-preload-643397" [29843b74-a1d2-46af-ac5e-06f4d53a0ac4] Running
	I1003 19:37:34.235358  469677 system_pods.go:89] "kube-proxy-lcs2q" [f25c0891-1202-477f-9cc9-5e41c3f1b9fb] Running
	I1003 19:37:34.235362  469677 system_pods.go:89] "kube-scheduler-no-preload-643397" [6865d4a0-3590-465e-81e1-927d271170c0] Running
	I1003 19:37:34.235368  469677 system_pods.go:89] "storage-provisioner" [355c16e4-3158-4ffc-9379-57747ed71cca] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1003 19:37:34.235401  469677 retry.go:31] will retry after 248.460437ms: missing components: kube-dns
	I1003 19:37:34.489309  469677 system_pods.go:86] 8 kube-system pods found
	I1003 19:37:34.489347  469677 system_pods.go:89] "coredns-66bc5c9577-h8n5p" [d7f4ec9d-9c68-4332-b6c7-e52f424dcd1e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1003 19:37:34.489354  469677 system_pods.go:89] "etcd-no-preload-643397" [642f5548-1caf-4bb4-9780-63e00e8b0a3c] Running
	I1003 19:37:34.489361  469677 system_pods.go:89] "kindnet-7zwct" [bd0ecfeb-3764-425f-b7ae-e6f5b3e161d8] Running
	I1003 19:37:34.489385  469677 system_pods.go:89] "kube-apiserver-no-preload-643397" [6e4aa6fd-218d-45ce-a0d9-a1736936d2d3] Running
	I1003 19:37:34.489390  469677 system_pods.go:89] "kube-controller-manager-no-preload-643397" [29843b74-a1d2-46af-ac5e-06f4d53a0ac4] Running
	I1003 19:37:34.489395  469677 system_pods.go:89] "kube-proxy-lcs2q" [f25c0891-1202-477f-9cc9-5e41c3f1b9fb] Running
	I1003 19:37:34.489404  469677 system_pods.go:89] "kube-scheduler-no-preload-643397" [6865d4a0-3590-465e-81e1-927d271170c0] Running
	I1003 19:37:34.489412  469677 system_pods.go:89] "storage-provisioner" [355c16e4-3158-4ffc-9379-57747ed71cca] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1003 19:37:34.489427  469677 retry.go:31] will retry after 349.773107ms: missing components: kube-dns
	I1003 19:37:34.842556  469677 system_pods.go:86] 8 kube-system pods found
	I1003 19:37:34.842590  469677 system_pods.go:89] "coredns-66bc5c9577-h8n5p" [d7f4ec9d-9c68-4332-b6c7-e52f424dcd1e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1003 19:37:34.842597  469677 system_pods.go:89] "etcd-no-preload-643397" [642f5548-1caf-4bb4-9780-63e00e8b0a3c] Running
	I1003 19:37:34.842604  469677 system_pods.go:89] "kindnet-7zwct" [bd0ecfeb-3764-425f-b7ae-e6f5b3e161d8] Running
	I1003 19:37:34.842609  469677 system_pods.go:89] "kube-apiserver-no-preload-643397" [6e4aa6fd-218d-45ce-a0d9-a1736936d2d3] Running
	I1003 19:37:34.842617  469677 system_pods.go:89] "kube-controller-manager-no-preload-643397" [29843b74-a1d2-46af-ac5e-06f4d53a0ac4] Running
	I1003 19:37:34.842621  469677 system_pods.go:89] "kube-proxy-lcs2q" [f25c0891-1202-477f-9cc9-5e41c3f1b9fb] Running
	I1003 19:37:34.842632  469677 system_pods.go:89] "kube-scheduler-no-preload-643397" [6865d4a0-3590-465e-81e1-927d271170c0] Running
	I1003 19:37:34.842638  469677 system_pods.go:89] "storage-provisioner" [355c16e4-3158-4ffc-9379-57747ed71cca] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1003 19:37:34.842653  469677 retry.go:31] will retry after 478.014809ms: missing components: kube-dns
	I1003 19:37:35.324852  469677 system_pods.go:86] 8 kube-system pods found
	I1003 19:37:35.324885  469677 system_pods.go:89] "coredns-66bc5c9577-h8n5p" [d7f4ec9d-9c68-4332-b6c7-e52f424dcd1e] Running
	I1003 19:37:35.324892  469677 system_pods.go:89] "etcd-no-preload-643397" [642f5548-1caf-4bb4-9780-63e00e8b0a3c] Running
	I1003 19:37:35.324897  469677 system_pods.go:89] "kindnet-7zwct" [bd0ecfeb-3764-425f-b7ae-e6f5b3e161d8] Running
	I1003 19:37:35.324905  469677 system_pods.go:89] "kube-apiserver-no-preload-643397" [6e4aa6fd-218d-45ce-a0d9-a1736936d2d3] Running
	I1003 19:37:35.324940  469677 system_pods.go:89] "kube-controller-manager-no-preload-643397" [29843b74-a1d2-46af-ac5e-06f4d53a0ac4] Running
	I1003 19:37:35.324953  469677 system_pods.go:89] "kube-proxy-lcs2q" [f25c0891-1202-477f-9cc9-5e41c3f1b9fb] Running
	I1003 19:37:35.324958  469677 system_pods.go:89] "kube-scheduler-no-preload-643397" [6865d4a0-3590-465e-81e1-927d271170c0] Running
	I1003 19:37:35.324962  469677 system_pods.go:89] "storage-provisioner" [355c16e4-3158-4ffc-9379-57747ed71cca] Running
	I1003 19:37:35.324969  469677 system_pods.go:126] duration metric: took 1.395856253s to wait for k8s-apps to be running ...
	I1003 19:37:35.324982  469677 system_svc.go:44] waiting for kubelet service to be running ....
	I1003 19:37:35.325049  469677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 19:37:35.338955  469677 system_svc.go:56] duration metric: took 13.963268ms WaitForService to wait for kubelet
	I1003 19:37:35.339034  469677 kubeadm.go:586] duration metric: took 16.034355182s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 19:37:35.339070  469677 node_conditions.go:102] verifying NodePressure condition ...
	I1003 19:37:35.342074  469677 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1003 19:37:35.342109  469677 node_conditions.go:123] node cpu capacity is 2
	I1003 19:37:35.342126  469677 node_conditions.go:105] duration metric: took 3.043245ms to run NodePressure ...
	I1003 19:37:35.342138  469677 start.go:241] waiting for startup goroutines ...
	I1003 19:37:35.342146  469677 start.go:246] waiting for cluster config update ...
	I1003 19:37:35.342158  469677 start.go:255] writing updated cluster config ...
	I1003 19:37:35.342457  469677 ssh_runner.go:195] Run: rm -f paused
	I1003 19:37:35.346951  469677 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1003 19:37:35.350667  469677 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-h8n5p" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:37:35.355997  469677 pod_ready.go:94] pod "coredns-66bc5c9577-h8n5p" is "Ready"
	I1003 19:37:35.356030  469677 pod_ready.go:86] duration metric: took 5.334275ms for pod "coredns-66bc5c9577-h8n5p" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:37:35.358383  469677 pod_ready.go:83] waiting for pod "etcd-no-preload-643397" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:37:35.363206  469677 pod_ready.go:94] pod "etcd-no-preload-643397" is "Ready"
	I1003 19:37:35.363231  469677 pod_ready.go:86] duration metric: took 4.821224ms for pod "etcd-no-preload-643397" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:37:35.366173  469677 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-643397" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:37:35.370975  469677 pod_ready.go:94] pod "kube-apiserver-no-preload-643397" is "Ready"
	I1003 19:37:35.371012  469677 pod_ready.go:86] duration metric: took 4.811206ms for pod "kube-apiserver-no-preload-643397" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:37:35.375547  469677 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-643397" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:37:35.751762  469677 pod_ready.go:94] pod "kube-controller-manager-no-preload-643397" is "Ready"
	I1003 19:37:35.751787  469677 pod_ready.go:86] duration metric: took 376.212677ms for pod "kube-controller-manager-no-preload-643397" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:37:35.951184  469677 pod_ready.go:83] waiting for pod "kube-proxy-lcs2q" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:37:36.350602  469677 pod_ready.go:94] pod "kube-proxy-lcs2q" is "Ready"
	I1003 19:37:36.350635  469677 pod_ready.go:86] duration metric: took 399.421484ms for pod "kube-proxy-lcs2q" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:37:36.550913  469677 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-643397" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:37:36.951534  469677 pod_ready.go:94] pod "kube-scheduler-no-preload-643397" is "Ready"
	I1003 19:37:36.951574  469677 pod_ready.go:86] duration metric: took 400.633013ms for pod "kube-scheduler-no-preload-643397" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:37:36.951587  469677 pod_ready.go:40] duration metric: took 1.604603534s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
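The pod_ready waits above treat a pod as "Ready" when its PodReady condition reports True (and stop waiting if the pod is gone). A small helper expressing that predicate with the core/v1 types (a sketch, not the actual pod_ready.go code):

package sketch

import corev1 "k8s.io/api/core/v1"

// isPodReady reports whether the pod's Ready condition is True, which is
// the predicate behind the pod_ready.go "is Ready" messages above.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}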
	I1003 19:37:37.024926  469677 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1003 19:37:37.028838  469677 out.go:179] * Done! kubectl is now configured to use "no-preload-643397" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 03 19:37:29 old-k8s-version-174543 crio[653]: time="2025-10-03T19:37:29.696899174Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 03 19:37:29 old-k8s-version-174543 crio[653]: time="2025-10-03T19:37:29.702927415Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 03 19:37:29 old-k8s-version-174543 crio[653]: time="2025-10-03T19:37:29.702961992Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 03 19:37:29 old-k8s-version-174543 crio[653]: time="2025-10-03T19:37:29.702983391Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 03 19:37:29 old-k8s-version-174543 crio[653]: time="2025-10-03T19:37:29.706498799Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 03 19:37:29 old-k8s-version-174543 crio[653]: time="2025-10-03T19:37:29.706530972Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 03 19:37:29 old-k8s-version-174543 crio[653]: time="2025-10-03T19:37:29.706552101Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 03 19:37:29 old-k8s-version-174543 crio[653]: time="2025-10-03T19:37:29.70958829Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 03 19:37:29 old-k8s-version-174543 crio[653]: time="2025-10-03T19:37:29.709624032Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 03 19:37:29 old-k8s-version-174543 crio[653]: time="2025-10-03T19:37:29.709649123Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 03 19:37:29 old-k8s-version-174543 crio[653]: time="2025-10-03T19:37:29.713189779Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 03 19:37:29 old-k8s-version-174543 crio[653]: time="2025-10-03T19:37:29.713222403Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 03 19:37:36 old-k8s-version-174543 crio[653]: time="2025-10-03T19:37:36.373425097Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=4a773727-58fc-4840-bf48-3138bc9db99e name=/runtime.v1.ImageService/ImageStatus
	Oct 03 19:37:36 old-k8s-version-174543 crio[653]: time="2025-10-03T19:37:36.374314784Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=c887ea95-2152-4339-aa97-887acb0a9f2a name=/runtime.v1.ImageService/ImageStatus
	Oct 03 19:37:36 old-k8s-version-174543 crio[653]: time="2025-10-03T19:37:36.375429902Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vfkv8/dashboard-metrics-scraper" id=8c17fc94-f414-425b-91f5-8801aaff294a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:37:36 old-k8s-version-174543 crio[653]: time="2025-10-03T19:37:36.375634688Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 19:37:36 old-k8s-version-174543 crio[653]: time="2025-10-03T19:37:36.384517824Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 19:37:36 old-k8s-version-174543 crio[653]: time="2025-10-03T19:37:36.385151565Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 19:37:36 old-k8s-version-174543 crio[653]: time="2025-10-03T19:37:36.414334482Z" level=info msg="Created container c2d2e81f1c95c24f945e4ca4a6f6e6308d203a2030802e620a0adb06b519a7d2: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vfkv8/dashboard-metrics-scraper" id=8c17fc94-f414-425b-91f5-8801aaff294a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:37:36 old-k8s-version-174543 crio[653]: time="2025-10-03T19:37:36.415434231Z" level=info msg="Starting container: c2d2e81f1c95c24f945e4ca4a6f6e6308d203a2030802e620a0adb06b519a7d2" id=713b1eb0-4bb0-4111-8ad3-5d0da382113b name=/runtime.v1.RuntimeService/StartContainer
	Oct 03 19:37:36 old-k8s-version-174543 crio[653]: time="2025-10-03T19:37:36.417256757Z" level=info msg="Started container" PID=1703 containerID=c2d2e81f1c95c24f945e4ca4a6f6e6308d203a2030802e620a0adb06b519a7d2 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vfkv8/dashboard-metrics-scraper id=713b1eb0-4bb0-4111-8ad3-5d0da382113b name=/runtime.v1.RuntimeService/StartContainer sandboxID=30463e946e653a6d9481df30b6a6f942304353af5b615475044b4ca1f702db33
	Oct 03 19:37:36 old-k8s-version-174543 conmon[1699]: conmon c2d2e81f1c95c24f945e <ninfo>: container 1703 exited with status 1
	Oct 03 19:37:36 old-k8s-version-174543 crio[653]: time="2025-10-03T19:37:36.7780258Z" level=info msg="Removing container: 9641d990cd3d20c343b9117d55b8144f7a0bcf421422c6cb22409e21e8da9cf7" id=5ea0def3-0f3c-466e-9292-5cb80b4ab322 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 03 19:37:36 old-k8s-version-174543 crio[653]: time="2025-10-03T19:37:36.785184414Z" level=info msg="Error loading conmon cgroup of container 9641d990cd3d20c343b9117d55b8144f7a0bcf421422c6cb22409e21e8da9cf7: cgroup deleted" id=5ea0def3-0f3c-466e-9292-5cb80b4ab322 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 03 19:37:36 old-k8s-version-174543 crio[653]: time="2025-10-03T19:37:36.78863057Z" level=info msg="Removed container 9641d990cd3d20c343b9117d55b8144f7a0bcf421422c6cb22409e21e8da9cf7: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vfkv8/dashboard-metrics-scraper" id=5ea0def3-0f3c-466e-9292-5cb80b4ab322 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	c2d2e81f1c95c       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           9 seconds ago        Exited              dashboard-metrics-scraper   2                   30463e946e653       dashboard-metrics-scraper-5f989dc9cf-vfkv8       kubernetes-dashboard
	299e25627798d       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           24 seconds ago       Running             storage-provisioner         2                   ac2360cd7dfe9       storage-provisioner                              kube-system
	d250f6446c88c       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   36 seconds ago       Running             kubernetes-dashboard        0                   bf45efee6adea       kubernetes-dashboard-8694d4445c-4tgnz            kubernetes-dashboard
	edf79b93e4b38       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           56 seconds ago       Running             coredns                     1                   655d88fe34d01       coredns-5dd5756b68-6grkm                         kube-system
	ed93641b7305e       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           56 seconds ago       Exited              storage-provisioner         1                   ac2360cd7dfe9       storage-provisioner                              kube-system
	8546643fba7e5       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           56 seconds ago       Running             busybox                     1                   ac73651a8544b       busybox                                          default
	b0164ebd7fa62       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           56 seconds ago       Running             kindnet-cni                 1                   0fbb63c13f83e       kindnet-rwdd6                                    kube-system
	07e35fb642fb1       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           56 seconds ago       Running             kube-proxy                  1                   9ce34b7484cc6       kube-proxy-v4mqk                                 kube-system
	9d777d7ca3f3a       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           About a minute ago   Running             kube-apiserver              1                   d4ad0dd3afe72       kube-apiserver-old-k8s-version-174543            kube-system
	fc8be4f0125f4       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           About a minute ago   Running             kube-scheduler              1                   4dfba0ba15d84       kube-scheduler-old-k8s-version-174543            kube-system
	5178fc63373a8       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           About a minute ago   Running             etcd                        1                   b445848275834       etcd-old-k8s-version-174543                      kube-system
	62ef8d10feba1       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           About a minute ago   Running             kube-controller-manager     1                   a25cd200cb3dd       kube-controller-manager-old-k8s-version-174543   kube-system
	
	
	==> coredns [edf79b93e4b38e2ee91c81e9e314756148e9674922f93889028ee8c7ecc4ef9d] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:60799 - 49190 "HINFO IN 1614990082667808264.2296963525466293270. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020841481s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-174543
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-174543
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a43873c79fc22f8b1ccd29d3dfa635d392b09335
	                    minikube.k8s.io/name=old-k8s-version-174543
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_03T19_35_35_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 03 Oct 2025 19:35:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-174543
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 03 Oct 2025 19:37:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 03 Oct 2025 19:37:39 +0000   Fri, 03 Oct 2025 19:35:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 03 Oct 2025 19:37:39 +0000   Fri, 03 Oct 2025 19:35:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 03 Oct 2025 19:37:39 +0000   Fri, 03 Oct 2025 19:35:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 03 Oct 2025 19:37:39 +0000   Fri, 03 Oct 2025 19:36:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-174543
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 781e6cd6dcfe4176a1510d7d87dc61ef
	  System UUID:                d17a7f15-898a-43d2-a8ef-eaca6b0b9649
	  Boot ID:                    3762136e-8bec-4104-a5cb-0b1976f6048e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 coredns-5dd5756b68-6grkm                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     119s
	  kube-system                 etcd-old-k8s-version-174543                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m12s
	  kube-system                 kindnet-rwdd6                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      119s
	  kube-system                 kube-apiserver-old-k8s-version-174543             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m14s
	  kube-system                 kube-controller-manager-old-k8s-version-174543    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m12s
	  kube-system                 kube-proxy-v4mqk                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-scheduler-old-k8s-version-174543             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m12s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-vfkv8        0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-4tgnz             0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 117s               kube-proxy       
	  Normal  Starting                 54s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m12s              kubelet          Node old-k8s-version-174543 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m12s              kubelet          Node old-k8s-version-174543 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m12s              kubelet          Node old-k8s-version-174543 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m12s              kubelet          Starting kubelet.
	  Normal  RegisteredNode           2m                 node-controller  Node old-k8s-version-174543 event: Registered Node old-k8s-version-174543 in Controller
	  Normal  NodeReady                105s               kubelet          Node old-k8s-version-174543 status is now: NodeReady
	  Normal  Starting                 67s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  67s (x8 over 67s)  kubelet          Node old-k8s-version-174543 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    67s (x8 over 67s)  kubelet          Node old-k8s-version-174543 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     67s (x8 over 67s)  kubelet          Node old-k8s-version-174543 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           45s                node-controller  Node old-k8s-version-174543 event: Registered Node old-k8s-version-174543 in Controller
	
	
	==> dmesg <==
	[Oct 3 19:07] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:08] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:09] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:10] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:11] overlayfs: idmapped layers are currently not supported
	[  +4.287643] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:12] overlayfs: idmapped layers are currently not supported
	[ +24.839009] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:13] overlayfs: idmapped layers are currently not supported
	[ +26.493253] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:15] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:16] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:17] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000010] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Oct 3 19:18] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:20] overlayfs: idmapped layers are currently not supported
	[ +32.018892] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:22] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:24] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:26] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:32] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:34] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:35] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:36] overlayfs: idmapped layers are currently not supported
	[  +4.740983] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [5178fc63373a85b7ab0aa3b1194bd3b13ba6e413c7f9fcf141e7a055caeea3d9] <==
	{"level":"info","ts":"2025-10-03T19:36:40.953045Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-03T19:36:40.928856Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-03T19:36:40.953085Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-03T19:36:40.928942Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-10-03T19:36:40.929244Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-10-03T19:36:40.953206Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-10-03T19:36:40.953287Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-03T19:36:40.953313Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-03T19:36:40.929331Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-03T19:36:40.968516Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-03T19:36:40.968563Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-03T19:36:41.968785Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-03T19:36:41.968903Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-03T19:36:41.968958Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-10-03T19:36:41.969Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-10-03T19:36:41.969033Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-03T19:36:41.96907Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-10-03T19:36:41.969098Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-03T19:36:41.978279Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-174543 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-03T19:36:41.978481Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-03T19:36:41.979529Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-10-03T19:36:41.996219Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-03T19:36:41.997372Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-03T19:36:41.997501Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-03T19:36:41.997535Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 19:37:46 up  2:20,  0 user,  load average: 5.21, 2.57, 2.05
	Linux old-k8s-version-174543 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b0164ebd7fa623d22d654d8c31fba34f430360c496ed08d6a01ebbe6ad7fa8fd] <==
	I1003 19:36:49.423918       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1003 19:36:49.430516       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1003 19:36:49.430724       1 main.go:148] setting mtu 1500 for CNI 
	I1003 19:36:49.430766       1 main.go:178] kindnetd IP family: "ipv4"
	I1003 19:36:49.430798       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-03T19:36:49Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1003 19:36:49.694805       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1003 19:36:49.705061       1 controller.go:381] "Waiting for informer caches to sync"
	I1003 19:36:49.705186       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1003 19:36:49.706084       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1003 19:37:19.695333       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1003 19:37:19.705855       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1003 19:37:19.706072       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1003 19:37:19.706200       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1003 19:37:21.305902       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1003 19:37:21.305992       1 metrics.go:72] Registering metrics
	I1003 19:37:21.306087       1 controller.go:711] "Syncing nftables rules"
	I1003 19:37:29.695171       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1003 19:37:29.696577       1 main.go:301] handling current node
	I1003 19:37:39.694408       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1003 19:37:39.694442       1 main.go:301] handling current node
	
	
	==> kube-apiserver [9d777d7ca3f3aae2a67724d1a6f8ab7dbc9844b33527c107ab163508dd940d95] <==
	I1003 19:36:47.918105       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1003 19:36:47.939329       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1003 19:36:47.939695       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1003 19:36:47.939716       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1003 19:36:47.939836       1 shared_informer.go:318] Caches are synced for configmaps
	I1003 19:36:47.939913       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1003 19:36:47.961522       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1003 19:36:47.962155       1 aggregator.go:166] initial CRD sync complete...
	I1003 19:36:47.962178       1 autoregister_controller.go:141] Starting autoregister controller
	I1003 19:36:47.962185       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1003 19:36:47.962192       1 cache.go:39] Caches are synced for autoregister controller
	I1003 19:36:47.993101       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1003 19:36:47.996491       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	E1003 19:36:48.135017       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1003 19:36:48.449046       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1003 19:36:51.856824       1 controller.go:624] quota admission added evaluator for: namespaces
	I1003 19:36:51.921354       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1003 19:36:51.955474       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1003 19:36:51.967957       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1003 19:36:51.981401       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1003 19:36:52.042140       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.61.42"}
	I1003 19:36:52.067205       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.37.125"}
	I1003 19:37:01.316612       1 controller.go:624] quota admission added evaluator for: endpoints
	I1003 19:37:01.404261       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1003 19:37:01.459195       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [62ef8d10feba1f56202dc665fa46660c227322fdddf49c3e984ffb9430f54164] <==
	I1003 19:37:01.442241       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="101.548µs"
	I1003 19:37:01.476952       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I1003 19:37:01.476980       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5f989dc9cf to 1"
	I1003 19:37:01.516821       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-4tgnz"
	I1003 19:37:01.522750       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-vfkv8"
	I1003 19:37:01.542452       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="66.139841ms"
	I1003 19:37:01.550547       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="73.486287ms"
	I1003 19:37:01.565678       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="15.069786ms"
	I1003 19:37:01.566038       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="288.333µs"
	I1003 19:37:01.585047       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="97.963µs"
	I1003 19:37:01.616968       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="74.462081ms"
	I1003 19:37:01.634444       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="17.415125ms"
	I1003 19:37:01.634562       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="49.946µs"
	I1003 19:37:01.749947       1 shared_informer.go:318] Caches are synced for garbage collector
	I1003 19:37:01.827864       1 shared_informer.go:318] Caches are synced for garbage collector
	I1003 19:37:01.827995       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1003 19:37:09.756607       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="47.461609ms"
	I1003 19:37:09.756685       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="45.383µs"
	I1003 19:37:15.750037       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="69.4µs"
	I1003 19:37:16.758971       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="77.4µs"
	I1003 19:37:17.755086       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="326.585µs"
	I1003 19:37:28.272829       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="9.966293ms"
	I1003 19:37:28.273870       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="47.886µs"
	I1003 19:37:37.802365       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="78.713µs"
	I1003 19:37:41.867247       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="97.233µs"
	
	
	==> kube-proxy [07e35fb642fb1060de6f5b6fe3a20dcbf4caddf1bf2630c89f54858a905f5d85] <==
	I1003 19:36:50.510482       1 server_others.go:69] "Using iptables proxy"
	I1003 19:36:51.053521       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1003 19:36:51.443230       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1003 19:36:51.461538       1 server_others.go:152] "Using iptables Proxier"
	I1003 19:36:51.461584       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1003 19:36:51.461591       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1003 19:36:51.461620       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1003 19:36:51.461811       1 server.go:846] "Version info" version="v1.28.0"
	I1003 19:36:51.461820       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1003 19:36:51.463226       1 config.go:188] "Starting service config controller"
	I1003 19:36:51.463235       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1003 19:36:51.463252       1 config.go:97] "Starting endpoint slice config controller"
	I1003 19:36:51.463255       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1003 19:36:51.463607       1 config.go:315] "Starting node config controller"
	I1003 19:36:51.463613       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1003 19:36:51.568501       1 shared_informer.go:318] Caches are synced for node config
	I1003 19:36:51.571213       1 shared_informer.go:318] Caches are synced for service config
	I1003 19:36:51.571227       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [fc8be4f0125f487dca2dc76dd1220ac22ffcd4a1e02920fcc8ee321799717ac2] <==
	I1003 19:36:46.083156       1 serving.go:348] Generated self-signed cert in-memory
	I1003 19:36:51.588827       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1003 19:36:51.588854       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1003 19:36:51.598231       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1003 19:36:51.598318       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1003 19:36:51.598331       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1003 19:36:51.598346       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1003 19:36:51.599999       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1003 19:36:51.600011       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1003 19:36:51.600027       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1003 19:36:51.600031       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1003 19:36:51.699261       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1003 19:36:51.700541       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1003 19:36:51.700621       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 03 19:37:01 old-k8s-version-174543 kubelet[780]: I1003 19:37:01.649867     780 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/cc2e663a-4e2d-43a5-8475-8e8990ff0576-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-4tgnz\" (UID: \"cc2e663a-4e2d-43a5-8475-8e8990ff0576\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-4tgnz"
	Oct 03 19:37:01 old-k8s-version-174543 kubelet[780]: I1003 19:37:01.649969     780 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldflw\" (UniqueName: \"kubernetes.io/projected/cd8f6ac2-0026-47a9-a2dd-63a0e5a68a01-kube-api-access-ldflw\") pod \"dashboard-metrics-scraper-5f989dc9cf-vfkv8\" (UID: \"cd8f6ac2-0026-47a9-a2dd-63a0e5a68a01\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vfkv8"
	Oct 03 19:37:01 old-k8s-version-174543 kubelet[780]: I1003 19:37:01.650057     780 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vthz\" (UniqueName: \"kubernetes.io/projected/cc2e663a-4e2d-43a5-8475-8e8990ff0576-kube-api-access-6vthz\") pod \"kubernetes-dashboard-8694d4445c-4tgnz\" (UID: \"cc2e663a-4e2d-43a5-8475-8e8990ff0576\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-4tgnz"
	Oct 03 19:37:01 old-k8s-version-174543 kubelet[780]: W1003 19:37:01.901541     780 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/e396cf711cf72d67a3eb0308bfe582b67073d4549b3bd8af7083d99767f74cff/crio-30463e946e653a6d9481df30b6a6f942304353af5b615475044b4ca1f702db33 WatchSource:0}: Error finding container 30463e946e653a6d9481df30b6a6f942304353af5b615475044b4ca1f702db33: Status 404 returned error can't find the container with id 30463e946e653a6d9481df30b6a6f942304353af5b615475044b4ca1f702db33
	Oct 03 19:37:01 old-k8s-version-174543 kubelet[780]: W1003 19:37:01.904698     780 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/e396cf711cf72d67a3eb0308bfe582b67073d4549b3bd8af7083d99767f74cff/crio-bf45efee6adea7c85d48e135973f20b098923b9f1d3bfd414a2e11fa3ad3bef0 WatchSource:0}: Error finding container bf45efee6adea7c85d48e135973f20b098923b9f1d3bfd414a2e11fa3ad3bef0: Status 404 returned error can't find the container with id bf45efee6adea7c85d48e135973f20b098923b9f1d3bfd414a2e11fa3ad3bef0
	Oct 03 19:37:09 old-k8s-version-174543 kubelet[780]: I1003 19:37:09.726234     780 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-4tgnz" podStartSLOduration=1.560199058 podCreationTimestamp="2025-10-03 19:37:01 +0000 UTC" firstStartedPulling="2025-10-03 19:37:01.912855405 +0000 UTC m=+22.896315977" lastFinishedPulling="2025-10-03 19:37:09.07882354 +0000 UTC m=+30.062284113" observedRunningTime="2025-10-03 19:37:09.70652637 +0000 UTC m=+30.689986943" watchObservedRunningTime="2025-10-03 19:37:09.726167194 +0000 UTC m=+30.709627767"
	Oct 03 19:37:15 old-k8s-version-174543 kubelet[780]: I1003 19:37:15.723188     780 scope.go:117] "RemoveContainer" containerID="f973d70d4e5266065ddc121570af6d59a783002e373b03da02c022c8aaafc71b"
	Oct 03 19:37:16 old-k8s-version-174543 kubelet[780]: I1003 19:37:16.724363     780 scope.go:117] "RemoveContainer" containerID="9641d990cd3d20c343b9117d55b8144f7a0bcf421422c6cb22409e21e8da9cf7"
	Oct 03 19:37:16 old-k8s-version-174543 kubelet[780]: I1003 19:37:16.725545     780 scope.go:117] "RemoveContainer" containerID="f973d70d4e5266065ddc121570af6d59a783002e373b03da02c022c8aaafc71b"
	Oct 03 19:37:16 old-k8s-version-174543 kubelet[780]: E1003 19:37:16.731628     780 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-vfkv8_kubernetes-dashboard(cd8f6ac2-0026-47a9-a2dd-63a0e5a68a01)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vfkv8" podUID="cd8f6ac2-0026-47a9-a2dd-63a0e5a68a01"
	Oct 03 19:37:17 old-k8s-version-174543 kubelet[780]: I1003 19:37:17.727652     780 scope.go:117] "RemoveContainer" containerID="9641d990cd3d20c343b9117d55b8144f7a0bcf421422c6cb22409e21e8da9cf7"
	Oct 03 19:37:17 old-k8s-version-174543 kubelet[780]: E1003 19:37:17.727999     780 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-vfkv8_kubernetes-dashboard(cd8f6ac2-0026-47a9-a2dd-63a0e5a68a01)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vfkv8" podUID="cd8f6ac2-0026-47a9-a2dd-63a0e5a68a01"
	Oct 03 19:37:20 old-k8s-version-174543 kubelet[780]: I1003 19:37:20.734629     780 scope.go:117] "RemoveContainer" containerID="ed93641b7305ecc78cf05b71981a9b30e56f9dd16df2e6eb2b65f4cc3ef9c10b"
	Oct 03 19:37:21 old-k8s-version-174543 kubelet[780]: I1003 19:37:21.850559     780 scope.go:117] "RemoveContainer" containerID="9641d990cd3d20c343b9117d55b8144f7a0bcf421422c6cb22409e21e8da9cf7"
	Oct 03 19:37:21 old-k8s-version-174543 kubelet[780]: E1003 19:37:21.852475     780 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-vfkv8_kubernetes-dashboard(cd8f6ac2-0026-47a9-a2dd-63a0e5a68a01)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vfkv8" podUID="cd8f6ac2-0026-47a9-a2dd-63a0e5a68a01"
	Oct 03 19:37:36 old-k8s-version-174543 kubelet[780]: I1003 19:37:36.372813     780 scope.go:117] "RemoveContainer" containerID="9641d990cd3d20c343b9117d55b8144f7a0bcf421422c6cb22409e21e8da9cf7"
	Oct 03 19:37:36 old-k8s-version-174543 kubelet[780]: I1003 19:37:36.776906     780 scope.go:117] "RemoveContainer" containerID="9641d990cd3d20c343b9117d55b8144f7a0bcf421422c6cb22409e21e8da9cf7"
	Oct 03 19:37:37 old-k8s-version-174543 kubelet[780]: I1003 19:37:37.782961     780 scope.go:117] "RemoveContainer" containerID="c2d2e81f1c95c24f945e4ca4a6f6e6308d203a2030802e620a0adb06b519a7d2"
	Oct 03 19:37:37 old-k8s-version-174543 kubelet[780]: E1003 19:37:37.783241     780 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-vfkv8_kubernetes-dashboard(cd8f6ac2-0026-47a9-a2dd-63a0e5a68a01)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vfkv8" podUID="cd8f6ac2-0026-47a9-a2dd-63a0e5a68a01"
	Oct 03 19:37:41 old-k8s-version-174543 kubelet[780]: I1003 19:37:41.850666     780 scope.go:117] "RemoveContainer" containerID="c2d2e81f1c95c24f945e4ca4a6f6e6308d203a2030802e620a0adb06b519a7d2"
	Oct 03 19:37:41 old-k8s-version-174543 kubelet[780]: E1003 19:37:41.851024     780 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-vfkv8_kubernetes-dashboard(cd8f6ac2-0026-47a9-a2dd-63a0e5a68a01)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vfkv8" podUID="cd8f6ac2-0026-47a9-a2dd-63a0e5a68a01"
	Oct 03 19:37:42 old-k8s-version-174543 kubelet[780]: I1003 19:37:42.389855     780 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 03 19:37:42 old-k8s-version-174543 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 03 19:37:42 old-k8s-version-174543 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 03 19:37:42 old-k8s-version-174543 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [d250f6446c88cc68c5a3d4d9876c5bdef89e65ab6fd74df4fbd79456c956c5d8] <==
	2025/10/03 19:37:09 Starting overwatch
	2025/10/03 19:37:09 Using namespace: kubernetes-dashboard
	2025/10/03 19:37:09 Using in-cluster config to connect to apiserver
	2025/10/03 19:37:09 Using secret token for csrf signing
	2025/10/03 19:37:09 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/03 19:37:09 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/03 19:37:09 Successful initial request to the apiserver, version: v1.28.0
	2025/10/03 19:37:09 Generating JWE encryption key
	2025/10/03 19:37:09 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/03 19:37:09 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/03 19:37:09 Initializing JWE encryption key from synchronized object
	2025/10/03 19:37:09 Creating in-cluster Sidecar client
	2025/10/03 19:37:09 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/03 19:37:09 Serving insecurely on HTTP port: 9090
	2025/10/03 19:37:39 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [299e25627798dd200810afddc280b9b6853cae4ac0ac3aba81703a80b719f759] <==
	I1003 19:37:20.890965       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1003 19:37:20.932009       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1003 19:37:20.932112       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1003 19:37:38.334583       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1003 19:37:38.334999       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"dad5d048-770e-49bf-b234-9f07728495ef", APIVersion:"v1", ResourceVersion:"624", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-174543_b17a2419-e152-47d5-8985-5f3c7cfff74a became leader
	I1003 19:37:38.335736       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-174543_b17a2419-e152-47d5-8985-5f3c7cfff74a!
	I1003 19:37:38.447796       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-174543_b17a2419-e152-47d5-8985-5f3c7cfff74a!
	
	
	==> storage-provisioner [ed93641b7305ecc78cf05b71981a9b30e56f9dd16df2e6eb2b65f4cc3ef9c10b] <==
	I1003 19:36:49.843669       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1003 19:37:19.845634       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-174543 -n old-k8s-version-174543
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-174543 -n old-k8s-version-174543: exit status 2 (458.623114ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-174543 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-174543
helpers_test.go:243: (dbg) docker inspect old-k8s-version-174543:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e396cf711cf72d67a3eb0308bfe582b67073d4549b3bd8af7083d99767f74cff",
	        "Created": "2025-10-03T19:35:07.94543535Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 470976,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-03T19:36:30.595188821Z",
	            "FinishedAt": "2025-10-03T19:36:29.589392196Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/e396cf711cf72d67a3eb0308bfe582b67073d4549b3bd8af7083d99767f74cff/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e396cf711cf72d67a3eb0308bfe582b67073d4549b3bd8af7083d99767f74cff/hostname",
	        "HostsPath": "/var/lib/docker/containers/e396cf711cf72d67a3eb0308bfe582b67073d4549b3bd8af7083d99767f74cff/hosts",
	        "LogPath": "/var/lib/docker/containers/e396cf711cf72d67a3eb0308bfe582b67073d4549b3bd8af7083d99767f74cff/e396cf711cf72d67a3eb0308bfe582b67073d4549b3bd8af7083d99767f74cff-json.log",
	        "Name": "/old-k8s-version-174543",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-174543:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-174543",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e396cf711cf72d67a3eb0308bfe582b67073d4549b3bd8af7083d99767f74cff",
	                "LowerDir": "/var/lib/docker/overlay2/48f8d5487aa8e63c3522dc4412a644c246929812a11cb3ecb803638938d2de80-init/diff:/var/lib/docker/overlay2/87b205803817b0b71a214d995ab7e10a92033bbf72d76d6e052f1d21ccecb313/diff",
	                "MergedDir": "/var/lib/docker/overlay2/48f8d5487aa8e63c3522dc4412a644c246929812a11cb3ecb803638938d2de80/merged",
	                "UpperDir": "/var/lib/docker/overlay2/48f8d5487aa8e63c3522dc4412a644c246929812a11cb3ecb803638938d2de80/diff",
	                "WorkDir": "/var/lib/docker/overlay2/48f8d5487aa8e63c3522dc4412a644c246929812a11cb3ecb803638938d2de80/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-174543",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-174543/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-174543",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-174543",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-174543",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b24d003fd1caf21e4e07e675a9d2114babca3dd3bb9e5a164b5dbd0f97c5baf9",
	            "SandboxKey": "/var/run/docker/netns/b24d003fd1ca",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33428"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33429"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33432"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33430"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33431"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-174543": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f2:68:ca:40:c1:7e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "002964c2ebf4675c3eed6a35959bca86f080d98023eaf2d830eb21475b5fd360",
	                    "EndpointID": "4b452d495b368ceeda75fdbfb658d632c2f7c01d6f152df2b1f0e3789e647080",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-174543",
	                        "e396cf711cf7"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-174543 -n old-k8s-version-174543
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-174543 -n old-k8s-version-174543: exit status 2 (414.979ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-174543 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-174543 logs -n 25: (1.707891402s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-388132 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-388132             │ jenkins │ v1.37.0 │ 03 Oct 25 19:25 UTC │                     │
	│ ssh     │ -p cilium-388132 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-388132             │ jenkins │ v1.37.0 │ 03 Oct 25 19:25 UTC │                     │
	│ ssh     │ -p cilium-388132 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-388132             │ jenkins │ v1.37.0 │ 03 Oct 25 19:25 UTC │                     │
	│ ssh     │ -p cilium-388132 sudo crio config                                                                                                                                                                                                             │ cilium-388132             │ jenkins │ v1.37.0 │ 03 Oct 25 19:25 UTC │                     │
	│ delete  │ -p cilium-388132                                                                                                                                                                                                                              │ cilium-388132             │ jenkins │ v1.37.0 │ 03 Oct 25 19:25 UTC │ 03 Oct 25 19:25 UTC │
	│ start   │ -p force-systemd-env-159095 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-159095  │ jenkins │ v1.37.0 │ 03 Oct 25 19:25 UTC │                     │
	│ ssh     │ force-systemd-flag-855981 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-855981 │ jenkins │ v1.37.0 │ 03 Oct 25 19:32 UTC │ 03 Oct 25 19:32 UTC │
	│ delete  │ -p force-systemd-flag-855981                                                                                                                                                                                                                  │ force-systemd-flag-855981 │ jenkins │ v1.37.0 │ 03 Oct 25 19:32 UTC │ 03 Oct 25 19:32 UTC │
	│ start   │ -p cert-expiration-324520 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-324520    │ jenkins │ v1.37.0 │ 03 Oct 25 19:32 UTC │ 03 Oct 25 19:33 UTC │
	│ delete  │ -p force-systemd-env-159095                                                                                                                                                                                                                   │ force-systemd-env-159095  │ jenkins │ v1.37.0 │ 03 Oct 25 19:34 UTC │ 03 Oct 25 19:34 UTC │
	│ start   │ -p cert-options-305866 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-305866       │ jenkins │ v1.37.0 │ 03 Oct 25 19:34 UTC │ 03 Oct 25 19:34 UTC │
	│ ssh     │ cert-options-305866 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-305866       │ jenkins │ v1.37.0 │ 03 Oct 25 19:34 UTC │ 03 Oct 25 19:34 UTC │
	│ ssh     │ -p cert-options-305866 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-305866       │ jenkins │ v1.37.0 │ 03 Oct 25 19:34 UTC │ 03 Oct 25 19:34 UTC │
	│ delete  │ -p cert-options-305866                                                                                                                                                                                                                        │ cert-options-305866       │ jenkins │ v1.37.0 │ 03 Oct 25 19:34 UTC │ 03 Oct 25 19:35 UTC │
	│ start   │ -p old-k8s-version-174543 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-174543    │ jenkins │ v1.37.0 │ 03 Oct 25 19:35 UTC │ 03 Oct 25 19:36 UTC │
	│ start   │ -p cert-expiration-324520 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-324520    │ jenkins │ v1.37.0 │ 03 Oct 25 19:36 UTC │ 03 Oct 25 19:36 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-174543 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-174543    │ jenkins │ v1.37.0 │ 03 Oct 25 19:36 UTC │                     │
	│ stop    │ -p old-k8s-version-174543 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-174543    │ jenkins │ v1.37.0 │ 03 Oct 25 19:36 UTC │ 03 Oct 25 19:36 UTC │
	│ delete  │ -p cert-expiration-324520                                                                                                                                                                                                                     │ cert-expiration-324520    │ jenkins │ v1.37.0 │ 03 Oct 25 19:36 UTC │ 03 Oct 25 19:36 UTC │
	│ start   │ -p no-preload-643397 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-643397         │ jenkins │ v1.37.0 │ 03 Oct 25 19:36 UTC │ 03 Oct 25 19:37 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-174543 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-174543    │ jenkins │ v1.37.0 │ 03 Oct 25 19:36 UTC │ 03 Oct 25 19:36 UTC │
	│ start   │ -p old-k8s-version-174543 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-174543    │ jenkins │ v1.37.0 │ 03 Oct 25 19:36 UTC │ 03 Oct 25 19:37 UTC │
	│ image   │ old-k8s-version-174543 image list --format=json                                                                                                                                                                                               │ old-k8s-version-174543    │ jenkins │ v1.37.0 │ 03 Oct 25 19:37 UTC │ 03 Oct 25 19:37 UTC │
	│ pause   │ -p old-k8s-version-174543 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-174543    │ jenkins │ v1.37.0 │ 03 Oct 25 19:37 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-643397 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-643397         │ jenkins │ v1.37.0 │ 03 Oct 25 19:37 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/03 19:36:30
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 19:36:30.250303  470831 out.go:360] Setting OutFile to fd 1 ...
	I1003 19:36:30.250494  470831 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 19:36:30.250523  470831 out.go:374] Setting ErrFile to fd 2...
	I1003 19:36:30.250546  470831 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 19:36:30.250819  470831 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-284583/.minikube/bin
	I1003 19:36:30.251259  470831 out.go:368] Setting JSON to false
	I1003 19:36:30.252174  470831 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8342,"bootTime":1759511849,"procs":166,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1003 19:36:30.252267  470831 start.go:140] virtualization:  
	I1003 19:36:30.257178  470831 out.go:179] * [old-k8s-version-174543] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1003 19:36:30.260325  470831 out.go:179]   - MINIKUBE_LOCATION=21625
	I1003 19:36:30.260401  470831 notify.go:220] Checking for updates...
	I1003 19:36:30.267120  470831 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 19:36:30.270199  470831 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21625-284583/kubeconfig
	I1003 19:36:30.276956  470831 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-284583/.minikube
	I1003 19:36:30.279893  470831 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1003 19:36:30.282916  470831 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 19:36:30.286374  470831 config.go:182] Loaded profile config "old-k8s-version-174543": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1003 19:36:30.289864  470831 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1003 19:36:30.292678  470831 driver.go:421] Setting default libvirt URI to qemu:///system
	I1003 19:36:30.336883  470831 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1003 19:36:30.337040  470831 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 19:36:30.414358  470831 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:46 OomKillDisable:true NGoroutines:60 SystemTime:2025-10-03 19:36:30.404346993 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1003 19:36:30.414469  470831 docker.go:318] overlay module found
	I1003 19:36:30.417827  470831 out.go:179] * Using the docker driver based on existing profile
	I1003 19:36:30.420720  470831 start.go:304] selected driver: docker
	I1003 19:36:30.420758  470831 start.go:924] validating driver "docker" against &{Name:old-k8s-version-174543 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-174543 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 19:36:30.420853  470831 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 19:36:30.421578  470831 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 19:36:30.506943  470831 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:46 OomKillDisable:true NGoroutines:60 SystemTime:2025-10-03 19:36:30.493477103 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1003 19:36:30.507327  470831 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 19:36:30.507368  470831 cni.go:84] Creating CNI manager for ""
	I1003 19:36:30.507434  470831 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 19:36:30.507477  470831 start.go:348] cluster config:
	{Name:old-k8s-version-174543 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-174543 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 19:36:30.510812  470831 out.go:179] * Starting "old-k8s-version-174543" primary control-plane node in "old-k8s-version-174543" cluster
	I1003 19:36:30.513670  470831 cache.go:123] Beginning downloading kic base image for docker with crio
	I1003 19:36:30.516637  470831 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1003 19:36:30.519439  470831 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1003 19:36:30.519507  470831 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21625-284583/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1003 19:36:30.519517  470831 cache.go:58] Caching tarball of preloaded images
	I1003 19:36:30.519513  470831 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1003 19:36:30.519599  470831 preload.go:233] Found /home/jenkins/minikube-integration/21625-284583/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1003 19:36:30.519608  470831 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1003 19:36:30.519724  470831 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/old-k8s-version-174543/config.json ...
	I1003 19:36:30.540975  470831 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1003 19:36:30.540996  470831 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1003 19:36:30.541009  470831 cache.go:232] Successfully downloaded all kic artifacts
	I1003 19:36:30.541031  470831 start.go:360] acquireMachinesLock for old-k8s-version-174543: {Name:mk19048ea0453627d87a673cd3a2fbc4574461a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 19:36:30.541081  470831 start.go:364] duration metric: took 34.183µs to acquireMachinesLock for "old-k8s-version-174543"
	I1003 19:36:30.541100  470831 start.go:96] Skipping create...Using existing machine configuration
	I1003 19:36:30.541105  470831 fix.go:54] fixHost starting: 
	I1003 19:36:30.541364  470831 cli_runner.go:164] Run: docker container inspect old-k8s-version-174543 --format={{.State.Status}}
	I1003 19:36:30.557751  470831 fix.go:112] recreateIfNeeded on old-k8s-version-174543: state=Stopped err=<nil>
	W1003 19:36:30.557780  470831 fix.go:138] unexpected machine state, will restart: <nil>
	I1003 19:36:29.888287  469677 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-643397
	
	I1003 19:36:29.888312  469677 ubuntu.go:182] provisioning hostname "no-preload-643397"
	I1003 19:36:29.888373  469677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-643397
	I1003 19:36:29.911157  469677 main.go:141] libmachine: Using SSH client type: native
	I1003 19:36:29.911451  469677 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1003 19:36:29.911465  469677 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-643397 && echo "no-preload-643397" | sudo tee /etc/hostname
	I1003 19:36:30.097224  469677 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-643397
	
	I1003 19:36:30.097314  469677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-643397
	I1003 19:36:30.129074  469677 main.go:141] libmachine: Using SSH client type: native
	I1003 19:36:30.129399  469677 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1003 19:36:30.129417  469677 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-643397' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-643397/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-643397' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 19:36:30.275239  469677 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 19:36:30.275263  469677 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21625-284583/.minikube CaCertPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21625-284583/.minikube}
	I1003 19:36:30.275285  469677 ubuntu.go:190] setting up certificates
	I1003 19:36:30.275296  469677 provision.go:84] configureAuth start
	I1003 19:36:30.275356  469677 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-643397
	I1003 19:36:30.296110  469677 provision.go:143] copyHostCerts
	I1003 19:36:30.296190  469677 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-284583/.minikube/ca.pem, removing ...
	I1003 19:36:30.296200  469677 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-284583/.minikube/ca.pem
	I1003 19:36:30.296284  469677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21625-284583/.minikube/ca.pem (1082 bytes)
	I1003 19:36:30.296395  469677 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-284583/.minikube/cert.pem, removing ...
	I1003 19:36:30.296404  469677 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-284583/.minikube/cert.pem
	I1003 19:36:30.296438  469677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21625-284583/.minikube/cert.pem (1123 bytes)
	I1003 19:36:30.296491  469677 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-284583/.minikube/key.pem, removing ...
	I1003 19:36:30.296496  469677 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-284583/.minikube/key.pem
	I1003 19:36:30.296519  469677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21625-284583/.minikube/key.pem (1675 bytes)
	I1003 19:36:30.296573  469677 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca-key.pem org=jenkins.no-preload-643397 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-643397]
	I1003 19:36:31.243632  469677 provision.go:177] copyRemoteCerts
	I1003 19:36:31.243707  469677 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 19:36:31.243750  469677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-643397
	I1003 19:36:31.265968  469677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/no-preload-643397/id_rsa Username:docker}
	I1003 19:36:31.367118  469677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 19:36:31.394435  469677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1003 19:36:31.426437  469677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1003 19:36:31.460100  469677 provision.go:87] duration metric: took 1.18478156s to configureAuth
	I1003 19:36:31.460175  469677 ubuntu.go:206] setting minikube options for container-runtime
	I1003 19:36:31.460399  469677 config.go:182] Loaded profile config "no-preload-643397": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 19:36:31.460582  469677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-643397
	I1003 19:36:31.483776  469677 main.go:141] libmachine: Using SSH client type: native
	I1003 19:36:31.484112  469677 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1003 19:36:31.484128  469677 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1003 19:36:31.741630  469677 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1003 19:36:31.741713  469677 machine.go:96] duration metric: took 5.061104012s to provisionDockerMachine
	I1003 19:36:31.741739  469677 client.go:171] duration metric: took 6.85414651s to LocalClient.Create
	I1003 19:36:31.741791  469677 start.go:167] duration metric: took 6.854271353s to libmachine.API.Create "no-preload-643397"
	I1003 19:36:31.741850  469677 start.go:293] postStartSetup for "no-preload-643397" (driver="docker")
	I1003 19:36:31.741878  469677 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 19:36:31.741973  469677 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 19:36:31.742040  469677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-643397
	I1003 19:36:31.759621  469677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/no-preload-643397/id_rsa Username:docker}
	I1003 19:36:31.856950  469677 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 19:36:31.860016  469677 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1003 19:36:31.860050  469677 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1003 19:36:31.860061  469677 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-284583/.minikube/addons for local assets ...
	I1003 19:36:31.860115  469677 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-284583/.minikube/files for local assets ...
	I1003 19:36:31.860195  469677 filesync.go:149] local asset: /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem -> 2864342.pem in /etc/ssl/certs
	I1003 19:36:31.860296  469677 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 19:36:31.867513  469677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem --> /etc/ssl/certs/2864342.pem (1708 bytes)
	I1003 19:36:31.885054  469677 start.go:296] duration metric: took 143.173249ms for postStartSetup
	I1003 19:36:31.885428  469677 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-643397
	I1003 19:36:31.902133  469677 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/config.json ...
	I1003 19:36:31.902412  469677 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 19:36:31.902472  469677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-643397
	I1003 19:36:31.918558  469677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/no-preload-643397/id_rsa Username:docker}
	I1003 19:36:32.012703  469677 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1003 19:36:32.018111  469677 start.go:128] duration metric: took 7.134271436s to createHost
	I1003 19:36:32.018135  469677 start.go:83] releasing machines lock for "no-preload-643397", held for 7.134409604s
	I1003 19:36:32.018208  469677 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-643397
	I1003 19:36:32.035359  469677 ssh_runner.go:195] Run: cat /version.json
	I1003 19:36:32.035416  469677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-643397
	I1003 19:36:32.035661  469677 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 19:36:32.035730  469677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-643397
	I1003 19:36:32.056813  469677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/no-preload-643397/id_rsa Username:docker}
	I1003 19:36:32.057019  469677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/no-preload-643397/id_rsa Username:docker}
	I1003 19:36:32.247781  469677 ssh_runner.go:195] Run: systemctl --version
	I1003 19:36:32.254306  469677 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1003 19:36:32.289494  469677 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1003 19:36:32.294123  469677 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 19:36:32.294252  469677 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 19:36:32.324165  469677 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1003 19:36:32.324188  469677 start.go:495] detecting cgroup driver to use...
	I1003 19:36:32.324220  469677 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1003 19:36:32.324271  469677 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 19:36:32.342515  469677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 19:36:32.355242  469677 docker.go:218] disabling cri-docker service (if available) ...
	I1003 19:36:32.355336  469677 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1003 19:36:32.373198  469677 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1003 19:36:32.393125  469677 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1003 19:36:32.514303  469677 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1003 19:36:32.631659  469677 docker.go:234] disabling docker service ...
	I1003 19:36:32.631788  469677 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1003 19:36:32.656370  469677 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1003 19:36:32.670863  469677 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1003 19:36:32.791284  469677 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1003 19:36:32.911277  469677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1003 19:36:32.924107  469677 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 19:36:32.938287  469677 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1003 19:36:32.938366  469677 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:36:32.946968  469677 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1003 19:36:32.947047  469677 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:36:32.955545  469677 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:36:32.964065  469677 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:36:32.972790  469677 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 19:36:32.980705  469677 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:36:32.989640  469677 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:36:33.004406  469677 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:36:33.016483  469677 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 19:36:33.024887  469677 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 19:36:33.032762  469677 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 19:36:33.145045  469677 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1003 19:36:33.274369  469677 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1003 19:36:33.274467  469677 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1003 19:36:33.278514  469677 start.go:563] Will wait 60s for crictl version
	I1003 19:36:33.278611  469677 ssh_runner.go:195] Run: which crictl
	I1003 19:36:33.282251  469677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1003 19:36:33.311593  469677 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1003 19:36:33.311722  469677 ssh_runner.go:195] Run: crio --version
	I1003 19:36:33.340238  469677 ssh_runner.go:195] Run: crio --version
	I1003 19:36:33.373021  469677 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1003 19:36:33.375998  469677 cli_runner.go:164] Run: docker network inspect no-preload-643397 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 19:36:33.391502  469677 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1003 19:36:33.395406  469677 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 19:36:33.405040  469677 kubeadm.go:883] updating cluster {Name:no-preload-643397 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-643397 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1003 19:36:33.405163  469677 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 19:36:33.405211  469677 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 19:36:33.431075  469677 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1003 19:36:33.431098  469677 cache_images.go:89] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1003 19:36:33.431180  469677 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 19:36:33.431390  469677 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1003 19:36:33.431484  469677 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1003 19:36:33.431563  469677 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1003 19:36:33.431666  469677 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1003 19:36:33.431762  469677 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1003 19:36:33.431843  469677 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1003 19:36:33.431979  469677 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1003 19:36:33.433411  469677 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1003 19:36:33.433668  469677 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 19:36:33.434250  469677 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1003 19:36:33.434497  469677 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1003 19:36:33.434701  469677 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1003 19:36:33.434887  469677 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1003 19:36:33.435088  469677 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1003 19:36:33.435250  469677 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1003 19:36:33.664277  469677 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.1
	I1003 19:36:33.664905  469677 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.4-0
	I1003 19:36:33.686754  469677 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.1
	I1003 19:36:33.688953  469677 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.1
	I1003 19:36:33.693910  469677 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1003 19:36:33.695245  469677 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1003 19:36:33.703603  469677 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.1
	I1003 19:36:33.727298  469677 cache_images.go:117] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196" in container runtime
	I1003 19:36:33.727341  469677 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1003 19:36:33.727413  469677 ssh_runner.go:195] Run: which crictl
	I1003 19:36:33.731888  469677 cache_images.go:117] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e" in container runtime
	I1003 19:36:33.731937  469677 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1003 19:36:33.732001  469677 ssh_runner.go:195] Run: which crictl
	I1003 19:36:33.808862  469677 cache_images.go:117] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9" in container runtime
	I1003 19:36:33.808934  469677 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1003 19:36:33.809006  469677 ssh_runner.go:195] Run: which crictl
	I1003 19:36:33.822519  469677 cache_images.go:117] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0" in container runtime
	I1003 19:36:33.822562  469677 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1003 19:36:33.822661  469677 ssh_runner.go:195] Run: which crictl
	I1003 19:36:33.826959  469677 cache_images.go:117] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc" in container runtime
	I1003 19:36:33.827026  469677 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1003 19:36:33.827082  469677 ssh_runner.go:195] Run: which crictl
	I1003 19:36:33.827187  469677 cache_images.go:117] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1003 19:36:33.827222  469677 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1003 19:36:33.827255  469677 ssh_runner.go:195] Run: which crictl
	I1003 19:36:33.829319  469677 cache_images.go:117] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a" in container runtime
	I1003 19:36:33.829388  469677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1003 19:36:33.829419  469677 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1003 19:36:33.829492  469677 ssh_runner.go:195] Run: which crictl
	I1003 19:36:33.829518  469677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1003 19:36:33.829334  469677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1003 19:36:33.836401  469677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1003 19:36:33.836515  469677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1003 19:36:33.838188  469677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1003 19:36:33.919978  469677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1003 19:36:33.920083  469677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1003 19:36:33.920154  469677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1003 19:36:33.920238  469677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1003 19:36:33.932206  469677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1003 19:36:33.932323  469677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1003 19:36:33.932391  469677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1003 19:36:34.020085  469677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1003 19:36:34.020207  469677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1003 19:36:34.020288  469677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1003 19:36:34.020365  469677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1003 19:36:34.049008  469677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1003 19:36:34.049126  469677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1003 19:36:34.049207  469677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1003 19:36:34.167904  469677 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1003 19:36:34.168055  469677 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1003 19:36:34.168144  469677 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1003 19:36:34.168224  469677 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1003 19:36:34.168292  469677 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1003 19:36:34.168427  469677 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1003 19:36:34.172013  469677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1003 19:36:34.179883  469677 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1003 19:36:34.179981  469677 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1003 19:36:34.180078  469677 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1003 19:36:34.180122  469677 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1003 19:36:34.180194  469677 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1003 19:36:34.180226  469677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (24581632 bytes)
	I1003 19:36:34.180259  469677 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1003 19:36:34.180281  469677 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1003 19:36:34.180325  469677 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1003 19:36:34.180368  469677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (98216960 bytes)
	I1003 19:36:34.180454  469677 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1003 19:36:34.180478  469677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (22790144 bytes)
	I1003 19:36:34.280256  469677 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1003 19:36:34.280295  469677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (20402176 bytes)
	I1003 19:36:34.280348  469677 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1003 19:36:34.280365  469677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (15790592 bytes)
	I1003 19:36:34.280411  469677 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1003 19:36:34.280486  469677 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1003 19:36:34.280533  469677 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1003 19:36:34.280549  469677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
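The run of lines above is the cached-image transfer step: for each image tarball minikube stats the target under /var/lib/minikube/images and only issues the scp when that existence check exits non-zero. A minimal local sketch of the same check-then-copy flow, using plain os calls in place of the ssh_runner; the file names in main are placeholders, not paths from this run.

package main

import (
	"fmt"
	"io"
	"os"
)

// copyIfMissing copies src to dst only when dst does not already exist,
// mirroring the "existence check ... Process exited with status 1" followed
// by "scp ..." pairs in the log above.
func copyIfMissing(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		fmt.Printf("%s already present, skipping transfer\n", dst)
		return nil
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	n, err := io.Copy(out, in)
	if err != nil {
		return err
	}
	fmt.Printf("copied %s --> %s (%d bytes)\n", src, dst, n)
	return nil
}

func main() {
	// Placeholder paths; the real flow targets /var/lib/minikube/images on the node.
	if err := copyIfMissing("pause_3.10.1.tar", "/tmp/pause_3.10.1"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}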
	W1003 19:36:34.315289  469677 ssh_runner.go:129] session error, resetting client: ssh: rejected: connect failed (open failed)
	I1003 19:36:34.315337  469677 retry.go:31] will retry after 228.546049ms: ssh: rejected: connect failed (open failed)
	I1003 19:36:34.388834  469677 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1003 19:36:34.388883  469677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (20730880 bytes)
	I1003 19:36:34.388984  469677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-643397
	I1003 19:36:34.430781  469677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/no-preload-643397/id_rsa Username:docker}
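The rejected ssh session at 19:36:34.315 is handled by waiting a short randomized delay and dialing a fresh client, as the warning, the "will retry after 228.546049ms" line, and the new ssh client above show. A minimal sketch of that retry-with-delay pattern, using a hypothetical retryAfter helper rather than minikube's own retry package.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryAfter re-runs fn until it succeeds or attempts run out, sleeping a
// short randomized delay between tries (hypothetical helper, not minikube's API).
func retryAfter(attempts int, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		delay := time.Duration(200+rand.Intn(100)) * time.Millisecond
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	calls := 0
	err := retryAfter(3, func() error {
		calls++
		if calls < 2 {
			return errors.New("ssh: rejected: connect failed (open failed)")
		}
		return nil // e.g. a fresh ssh client was established
	})
	fmt.Println("final result:", err)
}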
	W1003 19:36:34.646437  469677 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1003 19:36:34.646672  469677 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 19:36:30.561067  470831 out.go:252] * Restarting existing docker container for "old-k8s-version-174543" ...
	I1003 19:36:30.561167  470831 cli_runner.go:164] Run: docker start old-k8s-version-174543
	I1003 19:36:30.899786  470831 cli_runner.go:164] Run: docker container inspect old-k8s-version-174543 --format={{.State.Status}}
	I1003 19:36:30.946093  470831 kic.go:430] container "old-k8s-version-174543" state is running.
	I1003 19:36:30.946478  470831 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-174543
	I1003 19:36:30.993439  470831 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/old-k8s-version-174543/config.json ...
	I1003 19:36:30.994728  470831 machine.go:93] provisionDockerMachine start ...
	I1003 19:36:30.994803  470831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-174543
	I1003 19:36:31.031278  470831 main.go:141] libmachine: Using SSH client type: native
	I1003 19:36:31.031607  470831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I1003 19:36:31.031621  470831 main.go:141] libmachine: About to run SSH command:
	hostname
	I1003 19:36:31.032316  470831 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44486->127.0.0.1:33428: read: connection reset by peer
	I1003 19:36:34.204180  470831 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-174543
	
	I1003 19:36:34.204274  470831 ubuntu.go:182] provisioning hostname "old-k8s-version-174543"
	I1003 19:36:34.204364  470831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-174543
	I1003 19:36:34.226862  470831 main.go:141] libmachine: Using SSH client type: native
	I1003 19:36:34.227164  470831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I1003 19:36:34.227176  470831 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-174543 && echo "old-k8s-version-174543" | sudo tee /etc/hostname
	I1003 19:36:34.402266  470831 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-174543
	
	I1003 19:36:34.402352  470831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-174543
	I1003 19:36:34.438692  470831 main.go:141] libmachine: Using SSH client type: native
	I1003 19:36:34.439122  470831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I1003 19:36:34.439145  470831 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-174543' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-174543/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-174543' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 19:36:34.605174  470831 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 19:36:34.605197  470831 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21625-284583/.minikube CaCertPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21625-284583/.minikube}
	I1003 19:36:34.605215  470831 ubuntu.go:190] setting up certificates
	I1003 19:36:34.605225  470831 provision.go:84] configureAuth start
	I1003 19:36:34.605292  470831 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-174543
	I1003 19:36:34.638381  470831 provision.go:143] copyHostCerts
	I1003 19:36:34.638446  470831 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-284583/.minikube/ca.pem, removing ...
	I1003 19:36:34.638463  470831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-284583/.minikube/ca.pem
	I1003 19:36:34.638532  470831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21625-284583/.minikube/ca.pem (1082 bytes)
	I1003 19:36:34.638627  470831 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-284583/.minikube/cert.pem, removing ...
	I1003 19:36:34.638633  470831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-284583/.minikube/cert.pem
	I1003 19:36:34.638661  470831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21625-284583/.minikube/cert.pem (1123 bytes)
	I1003 19:36:34.638725  470831 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-284583/.minikube/key.pem, removing ...
	I1003 19:36:34.638730  470831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-284583/.minikube/key.pem
	I1003 19:36:34.638754  470831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21625-284583/.minikube/key.pem (1675 bytes)
	I1003 19:36:34.638805  470831 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-174543 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-174543]
	I1003 19:36:35.486484  470831 provision.go:177] copyRemoteCerts
	I1003 19:36:35.486873  470831 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 19:36:35.486984  470831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-174543
	I1003 19:36:35.534150  470831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/old-k8s-version-174543/id_rsa Username:docker}
	I1003 19:36:35.650048  470831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 19:36:35.691502  470831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1003 19:36:35.733348  470831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1003 19:36:35.769920  470831 provision.go:87] duration metric: took 1.164682718s to configureAuth
	I1003 19:36:35.769944  470831 ubuntu.go:206] setting minikube options for container-runtime
	I1003 19:36:35.770141  470831 config.go:182] Loaded profile config "old-k8s-version-174543": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1003 19:36:35.770244  470831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-174543
	I1003 19:36:35.790817  470831 main.go:141] libmachine: Using SSH client type: native
	I1003 19:36:35.791140  470831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I1003 19:36:35.791162  470831 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1003 19:36:36.147469  470831 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1003 19:36:36.147497  470831 machine.go:96] duration metric: took 5.152751689s to provisionDockerMachine
	I1003 19:36:36.147509  470831 start.go:293] postStartSetup for "old-k8s-version-174543" (driver="docker")
	I1003 19:36:36.147542  470831 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 19:36:36.147641  470831 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 19:36:36.147697  470831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-174543
	I1003 19:36:36.177232  470831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/old-k8s-version-174543/id_rsa Username:docker}
	I1003 19:36:36.288843  470831 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 19:36:36.292704  470831 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1003 19:36:36.292790  470831 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1003 19:36:36.292816  470831 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-284583/.minikube/addons for local assets ...
	I1003 19:36:36.292902  470831 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-284583/.minikube/files for local assets ...
	I1003 19:36:36.293042  470831 filesync.go:149] local asset: /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem -> 2864342.pem in /etc/ssl/certs
	I1003 19:36:36.293214  470831 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 19:36:36.301319  470831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem --> /etc/ssl/certs/2864342.pem (1708 bytes)
	I1003 19:36:36.333038  470831 start.go:296] duration metric: took 185.510283ms for postStartSetup
	I1003 19:36:36.333203  470831 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 19:36:36.333279  470831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-174543
	I1003 19:36:36.386053  470831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/old-k8s-version-174543/id_rsa Username:docker}
	I1003 19:36:36.497817  470831 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1003 19:36:36.504278  470831 fix.go:56] duration metric: took 5.963165639s for fixHost
	I1003 19:36:36.504310  470831 start.go:83] releasing machines lock for "old-k8s-version-174543", held for 5.963220515s
	I1003 19:36:36.504391  470831 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-174543
	I1003 19:36:36.529637  470831 ssh_runner.go:195] Run: cat /version.json
	I1003 19:36:36.529696  470831 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 19:36:36.529769  470831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-174543
	I1003 19:36:36.529698  470831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-174543
	I1003 19:36:36.561759  470831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/old-k8s-version-174543/id_rsa Username:docker}
	I1003 19:36:36.573961  470831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/old-k8s-version-174543/id_rsa Username:docker}
	I1003 19:36:36.779306  470831 ssh_runner.go:195] Run: systemctl --version
	I1003 19:36:36.786533  470831 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1003 19:36:36.832494  470831 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1003 19:36:36.837907  470831 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 19:36:36.837987  470831 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 19:36:36.847208  470831 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1003 19:36:36.847261  470831 start.go:495] detecting cgroup driver to use...
	I1003 19:36:36.847295  470831 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1003 19:36:36.847354  470831 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 19:36:36.865816  470831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 19:36:36.880072  470831 docker.go:218] disabling cri-docker service (if available) ...
	I1003 19:36:36.880182  470831 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1003 19:36:36.897242  470831 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1003 19:36:36.911479  470831 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1003 19:36:37.052811  470831 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1003 19:36:37.188773  470831 docker.go:234] disabling docker service ...
	I1003 19:36:37.188916  470831 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1003 19:36:37.204769  470831 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1003 19:36:37.221757  470831 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1003 19:36:37.365939  470831 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1003 19:36:37.510943  470831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1003 19:36:37.524746  470831 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 19:36:37.543788  470831 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1003 19:36:37.543905  470831 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:36:37.554315  470831 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1003 19:36:37.554469  470831 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:36:37.564239  470831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:36:37.580279  470831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:36:37.595387  470831 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 19:36:37.603905  470831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:36:37.615691  470831 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:36:37.624764  470831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:36:37.633792  470831 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 19:36:37.642054  470831 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 19:36:37.651457  470831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 19:36:37.863516  470831 ssh_runner.go:195] Run: sudo systemctl restart crio
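The commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon_cgroup, the unprivileged-port sysctl) and then reload systemd and restart crio. A condensed sketch of how that command sequence can be assembled from Go, with the resulting strings meant to be handed to an ssh runner; the function name and parameters are illustrative, not minikube's API, and only the core sed edits from the log are reproduced.

package main

import "fmt"

// crioConfigCommands returns the main shell edits applied to 02-crio.conf in the
// log above; in this run pauseImage is "registry.k8s.io/pause:3.9" and
// cgroupManager is "cgroupfs".
func crioConfigCommands(pauseImage, cgroupManager string) []string {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	return []string{
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupManager, conf),
		fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
		fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
		"sudo systemctl daemon-reload",
		"sudo systemctl restart crio",
	}
}

func main() {
	// A real caller would pass each command to an ssh runner; here we just print them.
	for _, cmd := range crioConfigCommands("registry.k8s.io/pause:3.9", "cgroupfs") {
		fmt.Println(cmd)
	}
}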
	I1003 19:36:38.329902  470831 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1003 19:36:38.330025  470831 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1003 19:36:38.335449  470831 start.go:563] Will wait 60s for crictl version
	I1003 19:36:38.335577  470831 ssh_runner.go:195] Run: which crictl
	I1003 19:36:38.341293  470831 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1003 19:36:38.390604  470831 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1003 19:36:38.390763  470831 ssh_runner.go:195] Run: crio --version
	I1003 19:36:38.428125  470831 ssh_runner.go:195] Run: crio --version
	I1003 19:36:38.483368  470831 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1003 19:36:34.789323  469677 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1003 19:36:34.789417  469677 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1003 19:36:34.914066  469677 cache_images.go:117] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1003 19:36:34.914105  469677 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 19:36:34.914164  469677 ssh_runner.go:195] Run: which crictl
	I1003 19:36:35.250876  469677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 19:36:35.272126  469677 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1003 19:36:35.272225  469677 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1003 19:36:35.272326  469677 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1003 19:36:35.437308  469677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 19:36:37.594416  469677 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1: (2.322063589s)
	I1003 19:36:37.594439  469677 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1003 19:36:37.594455  469677 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1003 19:36:37.594503  469677 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1003 19:36:37.594555  469677 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.157226594s)
	I1003 19:36:37.594585  469677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 19:36:38.486518  470831 cli_runner.go:164] Run: docker network inspect old-k8s-version-174543 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 19:36:38.506334  470831 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1003 19:36:38.511730  470831 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 19:36:38.529412  470831 kubeadm.go:883] updating cluster {Name:old-k8s-version-174543 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-174543 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1003 19:36:38.529522  470831 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1003 19:36:38.529576  470831 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 19:36:38.585748  470831 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 19:36:38.585776  470831 crio.go:433] Images already preloaded, skipping extraction
	I1003 19:36:38.585830  470831 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 19:36:38.628275  470831 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 19:36:38.628301  470831 cache_images.go:85] Images are preloaded, skipping loading
	I1003 19:36:38.628309  470831 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1003 19:36:38.628411  470831 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-174543 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-174543 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1003 19:36:38.628491  470831 ssh_runner.go:195] Run: crio config
	I1003 19:36:38.721955  470831 cni.go:84] Creating CNI manager for ""
	I1003 19:36:38.721980  470831 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 19:36:38.721998  470831 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1003 19:36:38.722029  470831 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-174543 NodeName:old-k8s-version-174543 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 19:36:38.722181  470831 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-174543"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1003 19:36:38.722270  470831 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1003 19:36:38.734990  470831 binaries.go:44] Found k8s binaries, skipping transfer
	I1003 19:36:38.735069  470831 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1003 19:36:38.743828  470831 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1003 19:36:38.757632  470831 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 19:36:38.773219  470831 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1003 19:36:38.788811  470831 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1003 19:36:38.792770  470831 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
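The one-liner above makes the control-plane.minikube.internal mapping idempotent: it filters out any existing line for that host, appends the fresh IP/host pair, and copies the temp file over /etc/hosts (copying rather than renaming, since /etc/hosts inside a container is typically a bind mount that cannot be replaced by rename). A small local sketch of the same ensure-entry logic; the file used in main is a placeholder, not the node's real /etc/hosts.

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry rewrites a hosts file so that exactly one line maps host
// to ip, mirroring the grep -v / echo / sudo cp one-liner in the log above.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(strings.TrimRight(line, " "), "\t"+host) {
			continue // drop any stale mapping for this host
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	// The real flow writes a temp file and then copies it over /etc/hosts on the node.
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	tmp := "hosts.example" // placeholder file
	_ = os.WriteFile(tmp, []byte("127.0.0.1\tlocalhost\n"), 0644)
	if err := ensureHostsEntry(tmp, "192.168.85.2", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	out, _ := os.ReadFile(tmp)
	fmt.Print(string(out))
}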
	I1003 19:36:38.807893  470831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 19:36:38.987564  470831 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 19:36:39.006441  470831 certs.go:69] Setting up /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/old-k8s-version-174543 for IP: 192.168.85.2
	I1003 19:36:39.006529  470831 certs.go:195] generating shared ca certs ...
	I1003 19:36:39.006560  470831 certs.go:227] acquiring lock for ca certs: {Name:mk5a10e6c921326e9c211447576eaeb893259ba7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:36:39.006788  470831 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21625-284583/.minikube/ca.key
	I1003 19:36:39.006870  470831 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.key
	I1003 19:36:39.006906  470831 certs.go:257] generating profile certs ...
	I1003 19:36:39.007047  470831 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/old-k8s-version-174543/client.key
	I1003 19:36:39.007163  470831 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/old-k8s-version-174543/apiserver.key.09eade1b
	I1003 19:36:39.007236  470831 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/old-k8s-version-174543/proxy-client.key
	I1003 19:36:39.007404  470831 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/286434.pem (1338 bytes)
	W1003 19:36:39.007468  470831 certs.go:480] ignoring /home/jenkins/minikube-integration/21625-284583/.minikube/certs/286434_empty.pem, impossibly tiny 0 bytes
	I1003 19:36:39.007494  470831 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 19:36:39.007563  470831 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem (1082 bytes)
	I1003 19:36:39.007612  470831 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem (1123 bytes)
	I1003 19:36:39.007665  470831 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/key.pem (1675 bytes)
	I1003 19:36:39.007744  470831 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem (1708 bytes)
	I1003 19:36:39.008444  470831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 19:36:39.070910  470831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1003 19:36:39.102477  470831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 19:36:39.131859  470831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1003 19:36:39.182220  470831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/old-k8s-version-174543/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1003 19:36:39.222848  470831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/old-k8s-version-174543/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1003 19:36:39.247686  470831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/old-k8s-version-174543/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 19:36:39.285222  470831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/old-k8s-version-174543/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1003 19:36:39.310065  470831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 19:36:39.341730  470831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/certs/286434.pem --> /usr/share/ca-certificates/286434.pem (1338 bytes)
	I1003 19:36:39.391536  470831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem --> /usr/share/ca-certificates/2864342.pem (1708 bytes)
	I1003 19:36:39.419719  470831 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1003 19:36:39.435425  470831 ssh_runner.go:195] Run: openssl version
	I1003 19:36:39.442930  470831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 19:36:39.453766  470831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 19:36:39.457959  470831 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  3 18:27 /usr/share/ca-certificates/minikubeCA.pem
	I1003 19:36:39.458064  470831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 19:36:39.503965  470831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1003 19:36:39.513478  470831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/286434.pem && ln -fs /usr/share/ca-certificates/286434.pem /etc/ssl/certs/286434.pem"
	I1003 19:36:39.521868  470831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/286434.pem
	I1003 19:36:39.526259  470831 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  3 18:34 /usr/share/ca-certificates/286434.pem
	I1003 19:36:39.526366  470831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/286434.pem
	I1003 19:36:39.576035  470831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/286434.pem /etc/ssl/certs/51391683.0"
	I1003 19:36:39.587037  470831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2864342.pem && ln -fs /usr/share/ca-certificates/2864342.pem /etc/ssl/certs/2864342.pem"
	I1003 19:36:39.596148  470831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2864342.pem
	I1003 19:36:39.600440  470831 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  3 18:34 /usr/share/ca-certificates/2864342.pem
	I1003 19:36:39.600506  470831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2864342.pem
	I1003 19:36:39.642070  470831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2864342.pem /etc/ssl/certs/3ec20f2e.0"
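The openssl/ln pairs above install each extra CA into the node's trust store: "openssl x509 -hash -noout" prints the certificate's subject hash, and a "<hash>.0" symlink in /etc/ssl/certs lets OpenSSL's CApath lookup find the PEM. A small sketch of those two steps, shelling out to openssl; the paths in main are placeholders rather than this run's files.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash computes the OpenSSL subject hash of pemPath and creates a
// "<hash>.0" symlink to it inside certsDir, like the ln -fs commands above.
func linkCertByHash(pemPath, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return "", err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // emulate ln -fs: replace any existing link
	return link, os.Symlink(pemPath, link)
}

func main() {
	// Placeholder paths; the real run targets /usr/share/ca-certificates/*.pem
	// and /etc/ssl/certs on the minikube node.
	link, err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", os.TempDir())
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("created", link)
}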
	I1003 19:36:39.650706  470831 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 19:36:39.654963  470831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1003 19:36:39.699817  470831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1003 19:36:39.741524  470831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1003 19:36:39.810137  470831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1003 19:36:39.867659  470831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1003 19:36:39.963823  470831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1003 19:36:40.065488  470831 kubeadm.go:400] StartCluster: {Name:old-k8s-version-174543 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-174543 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 19:36:40.065602  470831 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1003 19:36:40.065684  470831 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1003 19:36:40.150314  470831 cri.go:89] found id: "9d777d7ca3f3aae2a67724d1a6f8ab7dbc9844b33527c107ab163508dd940d95"
	I1003 19:36:40.150342  470831 cri.go:89] found id: "62ef8d10feba1f56202dc665fa46660c227322fdddf49c3e984ffb9430f54164"
	I1003 19:36:40.150348  470831 cri.go:89] found id: ""
	I1003 19:36:40.150431  470831 ssh_runner.go:195] Run: sudo runc list -f json
	W1003 19:36:40.209366  470831 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-03T19:36:40Z" level=error msg="open /run/runc: no such file or directory"
	I1003 19:36:40.209465  470831 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 19:36:40.238212  470831 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1003 19:36:40.238235  470831 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1003 19:36:40.238287  470831 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1003 19:36:40.309274  470831 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1003 19:36:40.309771  470831 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-174543" does not appear in /home/jenkins/minikube-integration/21625-284583/kubeconfig
	I1003 19:36:40.309937  470831 kubeconfig.go:62] /home/jenkins/minikube-integration/21625-284583/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-174543" cluster setting kubeconfig missing "old-k8s-version-174543" context setting]
	I1003 19:36:40.310734  470831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/kubeconfig: {Name:mkc1323fd87f4a78231a26d2dab0dff7feecf1e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:36:40.317747  470831 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1003 19:36:40.341224  470831 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1003 19:36:40.341310  470831 kubeadm.go:601] duration metric: took 103.068172ms to restartPrimaryControlPlane
	I1003 19:36:40.341334  470831 kubeadm.go:402] duration metric: took 275.871441ms to StartCluster
	I1003 19:36:40.341373  470831 settings.go:142] acquiring lock: {Name:mkc95577dbc448e3409dfa2b5e53a3a1327cb451 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:36:40.341463  470831 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21625-284583/kubeconfig
	I1003 19:36:40.342096  470831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/kubeconfig: {Name:mkc1323fd87f4a78231a26d2dab0dff7feecf1e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:36:40.342580  470831 config.go:182] Loaded profile config "old-k8s-version-174543": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1003 19:36:40.342648  470831 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1003 19:36:40.342700  470831 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1003 19:36:40.342845  470831 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-174543"
	I1003 19:36:40.342859  470831 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-174543"
	W1003 19:36:40.342865  470831 addons.go:247] addon storage-provisioner should already be in state true
	I1003 19:36:40.342887  470831 host.go:66] Checking if "old-k8s-version-174543" exists ...
	I1003 19:36:40.343383  470831 cli_runner.go:164] Run: docker container inspect old-k8s-version-174543 --format={{.State.Status}}
	I1003 19:36:40.343941  470831 addons.go:69] Setting dashboard=true in profile "old-k8s-version-174543"
	I1003 19:36:40.343965  470831 addons.go:238] Setting addon dashboard=true in "old-k8s-version-174543"
	W1003 19:36:40.343972  470831 addons.go:247] addon dashboard should already be in state true
	I1003 19:36:40.343995  470831 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-174543"
	I1003 19:36:40.344029  470831 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-174543"
	I1003 19:36:40.344003  470831 host.go:66] Checking if "old-k8s-version-174543" exists ...
	I1003 19:36:40.344381  470831 cli_runner.go:164] Run: docker container inspect old-k8s-version-174543 --format={{.State.Status}}
	I1003 19:36:40.344524  470831 cli_runner.go:164] Run: docker container inspect old-k8s-version-174543 --format={{.State.Status}}
	I1003 19:36:40.355866  470831 out.go:179] * Verifying Kubernetes components...
	I1003 19:36:40.368882  470831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 19:36:40.393921  470831 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-174543"
	W1003 19:36:40.393943  470831 addons.go:247] addon default-storageclass should already be in state true
	I1003 19:36:40.393969  470831 host.go:66] Checking if "old-k8s-version-174543" exists ...
	I1003 19:36:40.394399  470831 cli_runner.go:164] Run: docker container inspect old-k8s-version-174543 --format={{.State.Status}}
	I1003 19:36:40.408117  470831 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1003 19:36:40.411103  470831 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1003 19:36:40.414544  470831 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1003 19:36:40.414581  470831 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1003 19:36:40.414658  470831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-174543
	I1003 19:36:40.416772  470831 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 19:36:39.907186  469677 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.312579852s)
	I1003 19:36:39.907232  469677 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1003 19:36:39.907321  469677 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1003 19:36:39.907451  469677 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (2.312938627s)
	I1003 19:36:39.907466  469677 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1003 19:36:39.907481  469677 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1003 19:36:39.907512  469677 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1003 19:36:42.321165  469677 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (2.413625674s)
	I1003 19:36:42.321196  469677 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1003 19:36:42.321217  469677 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1003 19:36:42.321273  469677 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1003 19:36:42.321343  469677 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.414004815s)
	I1003 19:36:42.321363  469677 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1003 19:36:42.321381  469677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1003 19:36:44.503824  469677 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (2.182522238s)
	I1003 19:36:44.503853  469677 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1003 19:36:44.503873  469677 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1003 19:36:44.503931  469677 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1003 19:36:40.420887  470831 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 19:36:40.420912  470831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1003 19:36:40.420985  470831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-174543
	I1003 19:36:40.436447  470831 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1003 19:36:40.436474  470831 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1003 19:36:40.436538  470831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-174543
	I1003 19:36:40.468390  470831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/old-k8s-version-174543/id_rsa Username:docker}
	I1003 19:36:40.480958  470831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/old-k8s-version-174543/id_rsa Username:docker}
	I1003 19:36:40.491657  470831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/old-k8s-version-174543/id_rsa Username:docker}
	I1003 19:36:40.827254  470831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1003 19:36:40.871029  470831 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 19:36:40.871939  470831 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1003 19:36:40.871991  470831 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1003 19:36:40.905985  470831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 19:36:41.091414  470831 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1003 19:36:41.091481  470831 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1003 19:36:41.259108  470831 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1003 19:36:41.259190  470831 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1003 19:36:41.387179  470831 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1003 19:36:41.387248  470831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1003 19:36:41.463609  470831 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1003 19:36:41.463688  470831 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1003 19:36:41.521284  470831 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1003 19:36:41.521352  470831 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1003 19:36:41.571662  470831 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1003 19:36:41.571743  470831 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1003 19:36:41.606256  470831 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1003 19:36:41.606330  470831 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1003 19:36:41.633779  470831 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1003 19:36:41.633855  470831 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1003 19:36:41.682876  470831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1003 19:36:46.122072  469677 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.618115518s)
	I1003 19:36:46.122096  469677 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1003 19:36:46.122116  469677 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1003 19:36:46.122163  469677 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	I1003 19:36:50.486681  470831 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.659350106s)
	I1003 19:36:50.486868  470831 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (9.615770052s)
	I1003 19:36:50.486999  470831 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-174543" to be "Ready" ...
	I1003 19:36:50.563494  470831 node_ready.go:49] node "old-k8s-version-174543" is "Ready"
	I1003 19:36:50.563627  470831 node_ready.go:38] duration metric: took 76.592907ms for node "old-k8s-version-174543" to be "Ready" ...
	I1003 19:36:50.563657  470831 api_server.go:52] waiting for apiserver process to appear ...
	I1003 19:36:50.563753  470831 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 19:36:51.281166  470831 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.375104087s)
	I1003 19:36:52.074932  470831 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (10.391968891s)
	I1003 19:36:52.075163  470831 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.511377919s)
	I1003 19:36:52.075208  470831 api_server.go:72] duration metric: took 11.73243648s to wait for apiserver process to appear ...
	I1003 19:36:52.075222  470831 api_server.go:88] waiting for apiserver healthz status ...
	I1003 19:36:52.075241  470831 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1003 19:36:52.078448  470831 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-174543 addons enable metrics-server
	
	I1003 19:36:52.081625  470831 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1003 19:36:51.524837  469677 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (5.402654076s)
	I1003 19:36:51.524919  469677 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1003 19:36:51.524959  469677 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1003 19:36:51.525037  469677 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1003 19:36:52.294734  469677 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1003 19:36:52.294769  469677 cache_images.go:124] Successfully loaded all cached images
	I1003 19:36:52.294775  469677 cache_images.go:93] duration metric: took 18.863661907s to LoadCachedImages
	I1003 19:36:52.294786  469677 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1003 19:36:52.294879  469677 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-643397 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-643397 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1003 19:36:52.294960  469677 ssh_runner.go:195] Run: crio config
	I1003 19:36:52.364057  469677 cni.go:84] Creating CNI manager for ""
	I1003 19:36:52.364129  469677 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 19:36:52.364175  469677 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1003 19:36:52.364218  469677 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-643397 NodeName:no-preload-643397 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 19:36:52.364407  469677 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-643397"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1003 19:36:52.364517  469677 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1003 19:36:52.372571  469677 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1003 19:36:52.372685  469677 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1003 19:36:52.380593  469677 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
	I1003 19:36:52.380716  469677 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1003 19:36:52.380924  469677 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21625-284583/.minikube/cache/linux/arm64/v1.34.1/kubeadm
	I1003 19:36:52.381339  469677 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/21625-284583/.minikube/cache/linux/arm64/v1.34.1/kubelet
	I1003 19:36:52.386113  469677 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1003 19:36:52.386150  469677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/cache/linux/arm64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (58130616 bytes)
	I1003 19:36:53.545881  469677 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1003 19:36:53.549863  469677 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1003 19:36:53.549894  469677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/cache/linux/arm64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (71434424 bytes)
	I1003 19:36:53.709681  469677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 19:36:53.732427  469677 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1003 19:36:53.746177  469677 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1003 19:36:53.746216  469677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/cache/linux/arm64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (56426788 bytes)
	I1003 19:36:54.331746  469677 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1003 19:36:54.343207  469677 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1003 19:36:54.358285  469677 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 19:36:54.373325  469677 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1003 19:36:54.388029  469677 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1003 19:36:54.393493  469677 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 19:36:54.406615  469677 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 19:36:54.534391  469677 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 19:36:54.563833  469677 certs.go:69] Setting up /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397 for IP: 192.168.76.2
	I1003 19:36:54.563855  469677 certs.go:195] generating shared ca certs ...
	I1003 19:36:54.563872  469677 certs.go:227] acquiring lock for ca certs: {Name:mk5a10e6c921326e9c211447576eaeb893259ba7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:36:54.564060  469677 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21625-284583/.minikube/ca.key
	I1003 19:36:54.564138  469677 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.key
	I1003 19:36:54.564177  469677 certs.go:257] generating profile certs ...
	I1003 19:36:54.564260  469677 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/client.key
	I1003 19:36:54.564282  469677 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/client.crt with IP's: []
	I1003 19:36:52.084106  470831 addons.go:514] duration metric: took 11.741369469s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1003 19:36:52.092617  470831 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1003 19:36:52.094379  470831 api_server.go:141] control plane version: v1.28.0
	I1003 19:36:52.094453  470831 api_server.go:131] duration metric: took 19.211581ms to wait for apiserver health ...
	I1003 19:36:52.094475  470831 system_pods.go:43] waiting for kube-system pods to appear ...
	I1003 19:36:52.104999  470831 system_pods.go:59] 8 kube-system pods found
	I1003 19:36:52.105093  470831 system_pods.go:61] "coredns-5dd5756b68-6grkm" [678e0c98-f42a-4a69-8d50-a83a82886a69] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1003 19:36:52.105116  470831 system_pods.go:61] "etcd-old-k8s-version-174543" [8550f5a6-a2dc-4e9b-b623-9d0d9dfd66fd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1003 19:36:52.105151  470831 system_pods.go:61] "kindnet-rwdd6" [3cc7fea5-9441-4250-80b2-05aff82ce727] Running
	I1003 19:36:52.105178  470831 system_pods.go:61] "kube-apiserver-old-k8s-version-174543" [b8ce8574-fafd-4466-b9b8-b12c3ae221b7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1003 19:36:52.105201  470831 system_pods.go:61] "kube-controller-manager-old-k8s-version-174543" [aea29031-128c-4683-b165-ef6f11b79e72] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1003 19:36:52.105235  470831 system_pods.go:61] "kube-proxy-v4mqk" [50d549bb-e122-45af-8dad-b599f07053fd] Running
	I1003 19:36:52.105261  470831 system_pods.go:61] "kube-scheduler-old-k8s-version-174543" [3b73907b-8446-4189-9d96-e02a6c332aa6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1003 19:36:52.105279  470831 system_pods.go:61] "storage-provisioner" [8db23fd8-6872-4901-b61f-a88ac26407a7] Running
	I1003 19:36:52.105314  470831 system_pods.go:74] duration metric: took 10.804885ms to wait for pod list to return data ...
	I1003 19:36:52.105341  470831 default_sa.go:34] waiting for default service account to be created ...
	I1003 19:36:52.109408  470831 default_sa.go:45] found service account: "default"
	I1003 19:36:52.109473  470831 default_sa.go:55] duration metric: took 4.111364ms for default service account to be created ...
	I1003 19:36:52.109507  470831 system_pods.go:116] waiting for k8s-apps to be running ...
	I1003 19:36:52.113674  470831 system_pods.go:86] 8 kube-system pods found
	I1003 19:36:52.113760  470831 system_pods.go:89] "coredns-5dd5756b68-6grkm" [678e0c98-f42a-4a69-8d50-a83a82886a69] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1003 19:36:52.113785  470831 system_pods.go:89] "etcd-old-k8s-version-174543" [8550f5a6-a2dc-4e9b-b623-9d0d9dfd66fd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1003 19:36:52.113822  470831 system_pods.go:89] "kindnet-rwdd6" [3cc7fea5-9441-4250-80b2-05aff82ce727] Running
	I1003 19:36:52.113847  470831 system_pods.go:89] "kube-apiserver-old-k8s-version-174543" [b8ce8574-fafd-4466-b9b8-b12c3ae221b7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1003 19:36:52.113871  470831 system_pods.go:89] "kube-controller-manager-old-k8s-version-174543" [aea29031-128c-4683-b165-ef6f11b79e72] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1003 19:36:52.113906  470831 system_pods.go:89] "kube-proxy-v4mqk" [50d549bb-e122-45af-8dad-b599f07053fd] Running
	I1003 19:36:52.113933  470831 system_pods.go:89] "kube-scheduler-old-k8s-version-174543" [3b73907b-8446-4189-9d96-e02a6c332aa6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1003 19:36:52.113953  470831 system_pods.go:89] "storage-provisioner" [8db23fd8-6872-4901-b61f-a88ac26407a7] Running
	I1003 19:36:52.113990  470831 system_pods.go:126] duration metric: took 4.462457ms to wait for k8s-apps to be running ...
	I1003 19:36:52.114017  470831 system_svc.go:44] waiting for kubelet service to be running ....
	I1003 19:36:52.114104  470831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 19:36:52.129798  470831 system_svc.go:56] duration metric: took 15.772795ms WaitForService to wait for kubelet
	I1003 19:36:52.129872  470831 kubeadm.go:586] duration metric: took 11.787098529s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 19:36:52.129906  470831 node_conditions.go:102] verifying NodePressure condition ...
	I1003 19:36:52.133219  470831 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1003 19:36:52.133315  470831 node_conditions.go:123] node cpu capacity is 2
	I1003 19:36:52.133345  470831 node_conditions.go:105] duration metric: took 3.421679ms to run NodePressure ...
	I1003 19:36:52.133386  470831 start.go:241] waiting for startup goroutines ...
	I1003 19:36:52.133413  470831 start.go:246] waiting for cluster config update ...
	I1003 19:36:52.133439  470831 start.go:255] writing updated cluster config ...
	I1003 19:36:52.133757  470831 ssh_runner.go:195] Run: rm -f paused
	I1003 19:36:52.138185  470831 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1003 19:36:52.143212  470831 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-6grkm" in "kube-system" namespace to be "Ready" or be gone ...
	W1003 19:36:54.151250  470831 pod_ready.go:104] pod "coredns-5dd5756b68-6grkm" is not "Ready", error: <nil>
	I1003 19:36:54.723061  469677 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/client.crt ...
	I1003 19:36:54.723102  469677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/client.crt: {Name:mkea5bfb95d8fdb117792960e5221a8bc9115b50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:36:54.723346  469677 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/client.key ...
	I1003 19:36:54.723364  469677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/client.key: {Name:mkf4738ba9e553f9f9be1784d2e0f6c375d691df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:36:54.723521  469677 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/apiserver.key.ee2e84a9
	I1003 19:36:54.723538  469677 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/apiserver.crt.ee2e84a9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1003 19:36:55.207794  469677 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/apiserver.crt.ee2e84a9 ...
	I1003 19:36:55.207868  469677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/apiserver.crt.ee2e84a9: {Name:mk19ce55b7f476d867b58a46a648e11db58f5a77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:36:55.208085  469677 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/apiserver.key.ee2e84a9 ...
	I1003 19:36:55.208125  469677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/apiserver.key.ee2e84a9: {Name:mkc44185d4065ec27cc61b06ce0bc9de1613954b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:36:55.208247  469677 certs.go:382] copying /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/apiserver.crt.ee2e84a9 -> /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/apiserver.crt
	I1003 19:36:55.208353  469677 certs.go:386] copying /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/apiserver.key.ee2e84a9 -> /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/apiserver.key
	I1003 19:36:55.208436  469677 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/proxy-client.key
	I1003 19:36:55.208469  469677 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/proxy-client.crt with IP's: []
	I1003 19:36:56.304461  469677 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/proxy-client.crt ...
	I1003 19:36:56.304494  469677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/proxy-client.crt: {Name:mkb08c6c1be2a70b1e5ff3f6ddde2e4e9c47ee6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:36:56.304684  469677 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/proxy-client.key ...
	I1003 19:36:56.304701  469677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/proxy-client.key: {Name:mk1a2d478a1729a17beec4d720ca7883e92f1491 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:36:56.304906  469677 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/286434.pem (1338 bytes)
	W1003 19:36:56.304950  469677 certs.go:480] ignoring /home/jenkins/minikube-integration/21625-284583/.minikube/certs/286434_empty.pem, impossibly tiny 0 bytes
	I1003 19:36:56.304965  469677 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 19:36:56.304990  469677 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem (1082 bytes)
	I1003 19:36:56.305016  469677 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem (1123 bytes)
	I1003 19:36:56.305042  469677 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/key.pem (1675 bytes)
	I1003 19:36:56.305090  469677 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem (1708 bytes)
	I1003 19:36:56.305635  469677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 19:36:56.325874  469677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1003 19:36:56.344837  469677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 19:36:56.363293  469677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1003 19:36:56.381085  469677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1003 19:36:56.400919  469677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1003 19:36:56.419228  469677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 19:36:56.438028  469677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1003 19:36:56.455936  469677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem --> /usr/share/ca-certificates/2864342.pem (1708 bytes)
	I1003 19:36:56.474212  469677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 19:36:56.491955  469677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/certs/286434.pem --> /usr/share/ca-certificates/286434.pem (1338 bytes)
	I1003 19:36:56.510065  469677 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1003 19:36:56.524259  469677 ssh_runner.go:195] Run: openssl version
	I1003 19:36:56.534016  469677 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/286434.pem && ln -fs /usr/share/ca-certificates/286434.pem /etc/ssl/certs/286434.pem"
	I1003 19:36:56.543214  469677 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/286434.pem
	I1003 19:36:56.547972  469677 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  3 18:34 /usr/share/ca-certificates/286434.pem
	I1003 19:36:56.548066  469677 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/286434.pem
	I1003 19:36:56.591319  469677 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/286434.pem /etc/ssl/certs/51391683.0"
	I1003 19:36:56.600012  469677 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2864342.pem && ln -fs /usr/share/ca-certificates/2864342.pem /etc/ssl/certs/2864342.pem"
	I1003 19:36:56.608753  469677 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2864342.pem
	I1003 19:36:56.612596  469677 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  3 18:34 /usr/share/ca-certificates/2864342.pem
	I1003 19:36:56.612712  469677 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2864342.pem
	I1003 19:36:56.654061  469677 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2864342.pem /etc/ssl/certs/3ec20f2e.0"
	I1003 19:36:56.662615  469677 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 19:36:56.672208  469677 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 19:36:56.676572  469677 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  3 18:27 /usr/share/ca-certificates/minikubeCA.pem
	I1003 19:36:56.676683  469677 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 19:36:56.717711  469677 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1003 19:36:56.729797  469677 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 19:36:56.737585  469677 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1003 19:36:56.737637  469677 kubeadm.go:400] StartCluster: {Name:no-preload-643397 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-643397 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 19:36:56.737710  469677 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1003 19:36:56.737768  469677 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1003 19:36:56.780132  469677 cri.go:89] found id: ""
	I1003 19:36:56.780210  469677 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 19:36:56.789811  469677 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1003 19:36:56.797624  469677 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1003 19:36:56.797736  469677 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 19:36:56.805674  469677 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1003 19:36:56.805698  469677 kubeadm.go:157] found existing configuration files:
	
	I1003 19:36:56.805776  469677 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1003 19:36:56.814539  469677 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1003 19:36:56.814648  469677 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1003 19:36:56.822346  469677 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1003 19:36:56.829610  469677 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1003 19:36:56.829675  469677 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1003 19:36:56.836933  469677 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1003 19:36:56.852916  469677 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1003 19:36:56.852987  469677 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1003 19:36:56.863551  469677 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1003 19:36:56.873992  469677 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1003 19:36:56.874054  469677 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1003 19:36:56.882629  469677 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1003 19:36:56.923304  469677 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1003 19:36:56.923637  469677 kubeadm.go:318] [preflight] Running pre-flight checks
	I1003 19:36:56.956544  469677 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1003 19:36:56.956622  469677 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1003 19:36:56.956664  469677 kubeadm.go:318] OS: Linux
	I1003 19:36:56.956718  469677 kubeadm.go:318] CGROUPS_CPU: enabled
	I1003 19:36:56.956801  469677 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1003 19:36:56.956857  469677 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1003 19:36:56.956912  469677 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1003 19:36:56.956970  469677 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1003 19:36:56.957025  469677 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1003 19:36:56.957075  469677 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1003 19:36:56.957129  469677 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1003 19:36:56.957182  469677 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1003 19:36:57.030788  469677 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1003 19:36:57.030916  469677 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1003 19:36:57.031019  469677 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1003 19:36:57.050939  469677 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1003 19:36:57.055510  469677 out.go:252]   - Generating certificates and keys ...
	I1003 19:36:57.055689  469677 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1003 19:36:57.055808  469677 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1003 19:36:57.836445  469677 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1003 19:36:57.912322  469677 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1003 19:36:58.196922  469677 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1003 19:36:58.587327  469677 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1003 19:36:58.751249  469677 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1003 19:36:58.751615  469677 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-643397] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1003 19:36:58.838899  469677 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1003 19:36:58.839218  469677 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-643397] and IPs [192.168.76.2 127.0.0.1 ::1]
	W1003 19:36:56.152283  470831 pod_ready.go:104] pod "coredns-5dd5756b68-6grkm" is not "Ready", error: <nil>
	W1003 19:36:58.650953  470831 pod_ready.go:104] pod "coredns-5dd5756b68-6grkm" is not "Ready", error: <nil>
	I1003 19:36:59.776416  469677 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1003 19:37:00.060836  469677 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1003 19:37:00.317856  469677 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1003 19:37:00.318288  469677 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1003 19:37:00.476997  469677 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1003 19:37:00.676428  469677 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1003 19:37:00.863403  469677 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1003 19:37:01.550407  469677 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1003 19:37:02.648554  469677 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1003 19:37:02.648666  469677 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1003 19:37:02.648780  469677 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1003 19:37:02.652441  469677 out.go:252]   - Booting up control plane ...
	I1003 19:37:02.652564  469677 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1003 19:37:02.652647  469677 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1003 19:37:02.652719  469677 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1003 19:37:02.670695  469677 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1003 19:37:02.670820  469677 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1003 19:37:02.682650  469677 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1003 19:37:02.682776  469677 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1003 19:37:02.682820  469677 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1003 19:37:02.856554  469677 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1003 19:37:02.856720  469677 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1003 19:37:03.858878  469677 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.002481179s
	I1003 19:37:03.862941  469677 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1003 19:37:03.863050  469677 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1003 19:37:03.863150  469677 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1003 19:37:03.863894  469677 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1003 19:37:00.658147  470831 pod_ready.go:104] pod "coredns-5dd5756b68-6grkm" is not "Ready", error: <nil>
	W1003 19:37:03.151308  470831 pod_ready.go:104] pod "coredns-5dd5756b68-6grkm" is not "Ready", error: <nil>
	I1003 19:37:08.071258  469677 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 4.207141162s
	W1003 19:37:05.651702  470831 pod_ready.go:104] pod "coredns-5dd5756b68-6grkm" is not "Ready", error: <nil>
	W1003 19:37:07.652884  470831 pod_ready.go:104] pod "coredns-5dd5756b68-6grkm" is not "Ready", error: <nil>
	W1003 19:37:09.653756  470831 pod_ready.go:104] pod "coredns-5dd5756b68-6grkm" is not "Ready", error: <nil>
	I1003 19:37:10.649991  469677 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 6.781485326s
	I1003 19:37:12.866223  469677 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 9.002847252s
	I1003 19:37:12.888325  469677 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1003 19:37:12.909020  469677 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1003 19:37:12.954407  469677 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1003 19:37:12.954615  469677 kubeadm.go:318] [mark-control-plane] Marking the node no-preload-643397 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1003 19:37:12.978776  469677 kubeadm.go:318] [bootstrap-token] Using token: dz2q20.oxlpcyn3z86knmhs
	I1003 19:37:12.981972  469677 out.go:252]   - Configuring RBAC rules ...
	I1003 19:37:12.982125  469677 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1003 19:37:13.013673  469677 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1003 19:37:13.047764  469677 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1003 19:37:13.065884  469677 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1003 19:37:13.070997  469677 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1003 19:37:13.076272  469677 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1003 19:37:13.273866  469677 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1003 19:37:13.818579  469677 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1003 19:37:14.284423  469677 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1003 19:37:14.285888  469677 kubeadm.go:318] 
	I1003 19:37:14.285967  469677 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1003 19:37:14.285973  469677 kubeadm.go:318] 
	I1003 19:37:14.286054  469677 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1003 19:37:14.286060  469677 kubeadm.go:318] 
	I1003 19:37:14.286087  469677 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1003 19:37:14.286473  469677 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1003 19:37:14.286531  469677 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1003 19:37:14.286537  469677 kubeadm.go:318] 
	I1003 19:37:14.286593  469677 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1003 19:37:14.286598  469677 kubeadm.go:318] 
	I1003 19:37:14.286651  469677 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1003 19:37:14.286656  469677 kubeadm.go:318] 
	I1003 19:37:14.286711  469677 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1003 19:37:14.286789  469677 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1003 19:37:14.286872  469677 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1003 19:37:14.286883  469677 kubeadm.go:318] 
	I1003 19:37:14.287175  469677 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1003 19:37:14.287279  469677 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1003 19:37:14.287285  469677 kubeadm.go:318] 
	I1003 19:37:14.287544  469677 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token dz2q20.oxlpcyn3z86knmhs \
	I1003 19:37:14.287665  469677 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:f66ff31263aa4cda6b17caa2076838d6a1918275f1c2773b90b119c0d4a4d71a \
	I1003 19:37:14.287847  469677 kubeadm.go:318] 	--control-plane 
	I1003 19:37:14.287875  469677 kubeadm.go:318] 
	I1003 19:37:14.288110  469677 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1003 19:37:14.288128  469677 kubeadm.go:318] 
	I1003 19:37:14.288393  469677 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token dz2q20.oxlpcyn3z86knmhs \
	I1003 19:37:14.288650  469677 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:f66ff31263aa4cda6b17caa2076838d6a1918275f1c2773b90b119c0d4a4d71a 
	I1003 19:37:14.293244  469677 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1003 19:37:14.293485  469677 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1003 19:37:14.293601  469677 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1003 19:37:14.293622  469677 cni.go:84] Creating CNI manager for ""
	I1003 19:37:14.293634  469677 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 19:37:14.299735  469677 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1003 19:37:14.303086  469677 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1003 19:37:14.309906  469677 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1003 19:37:14.309930  469677 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1003 19:37:14.336322  469677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	W1003 19:37:11.655175  470831 pod_ready.go:104] pod "coredns-5dd5756b68-6grkm" is not "Ready", error: <nil>
	W1003 19:37:13.657155  470831 pod_ready.go:104] pod "coredns-5dd5756b68-6grkm" is not "Ready", error: <nil>
	I1003 19:37:14.811333  469677 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1003 19:37:14.811471  469677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 19:37:14.811560  469677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-643397 minikube.k8s.io/updated_at=2025_10_03T19_37_14_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=a43873c79fc22f8b1ccd29d3dfa635d392b09335 minikube.k8s.io/name=no-preload-643397 minikube.k8s.io/primary=true
	I1003 19:37:15.177419  469677 ops.go:34] apiserver oom_adj: -16
	I1003 19:37:15.177535  469677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 19:37:15.678053  469677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 19:37:16.177675  469677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 19:37:16.678465  469677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 19:37:17.177605  469677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 19:37:17.678441  469677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 19:37:18.177833  469677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 19:37:18.678473  469677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 19:37:19.177998  469677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 19:37:19.303395  469677 kubeadm.go:1113] duration metric: took 4.491974475s to wait for elevateKubeSystemPrivileges
	I1003 19:37:19.303422  469677 kubeadm.go:402] duration metric: took 22.565789399s to StartCluster
	I1003 19:37:19.303440  469677 settings.go:142] acquiring lock: {Name:mkc95577dbc448e3409dfa2b5e53a3a1327cb451 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:37:19.303498  469677 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21625-284583/kubeconfig
	I1003 19:37:19.304437  469677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/kubeconfig: {Name:mkc1323fd87f4a78231a26d2dab0dff7feecf1e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:37:19.304655  469677 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1003 19:37:19.304785  469677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1003 19:37:19.305028  469677 config.go:182] Loaded profile config "no-preload-643397": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 19:37:19.305059  469677 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1003 19:37:19.305117  469677 addons.go:69] Setting storage-provisioner=true in profile "no-preload-643397"
	I1003 19:37:19.305134  469677 addons.go:238] Setting addon storage-provisioner=true in "no-preload-643397"
	I1003 19:37:19.305155  469677 host.go:66] Checking if "no-preload-643397" exists ...
	I1003 19:37:19.305706  469677 addons.go:69] Setting default-storageclass=true in profile "no-preload-643397"
	I1003 19:37:19.305744  469677 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-643397"
	I1003 19:37:19.306024  469677 cli_runner.go:164] Run: docker container inspect no-preload-643397 --format={{.State.Status}}
	I1003 19:37:19.306036  469677 cli_runner.go:164] Run: docker container inspect no-preload-643397 --format={{.State.Status}}
	I1003 19:37:19.309052  469677 out.go:179] * Verifying Kubernetes components...
	I1003 19:37:19.315256  469677 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 19:37:19.344959  469677 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 19:37:19.350292  469677 addons.go:238] Setting addon default-storageclass=true in "no-preload-643397"
	I1003 19:37:19.350335  469677 host.go:66] Checking if "no-preload-643397" exists ...
	I1003 19:37:19.350745  469677 cli_runner.go:164] Run: docker container inspect no-preload-643397 --format={{.State.Status}}
	I1003 19:37:19.350945  469677 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 19:37:19.350970  469677 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1003 19:37:19.351010  469677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-643397
	I1003 19:37:19.400750  469677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/no-preload-643397/id_rsa Username:docker}
	I1003 19:37:19.407421  469677 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1003 19:37:19.407447  469677 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1003 19:37:19.407509  469677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-643397
	I1003 19:37:19.433989  469677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/no-preload-643397/id_rsa Username:docker}
	W1003 19:37:16.149238  470831 pod_ready.go:104] pod "coredns-5dd5756b68-6grkm" is not "Ready", error: <nil>
	W1003 19:37:18.649271  470831 pod_ready.go:104] pod "coredns-5dd5756b68-6grkm" is not "Ready", error: <nil>
	I1003 19:37:19.715486  469677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1003 19:37:19.715593  469677 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 19:37:19.772102  469677 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1003 19:37:19.820338  469677 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 19:37:20.371803  469677 node_ready.go:35] waiting up to 6m0s for node "no-preload-643397" to be "Ready" ...
	I1003 19:37:20.371912  469677 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1003 19:37:20.880944  469677 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-643397" context rescaled to 1 replicas
	I1003 19:37:20.986839  469677 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.166463205s)
	I1003 19:37:20.990124  469677 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1003 19:37:20.993057  469677 addons.go:514] duration metric: took 1.687963193s for enable addons: enabled=[default-storageclass storage-provisioner]
	W1003 19:37:22.376123  469677 node_ready.go:57] node "no-preload-643397" has "Ready":"False" status (will retry)
	W1003 19:37:20.649460  470831 pod_ready.go:104] pod "coredns-5dd5756b68-6grkm" is not "Ready", error: <nil>
	W1003 19:37:22.650326  470831 pod_ready.go:104] pod "coredns-5dd5756b68-6grkm" is not "Ready", error: <nil>
	W1003 19:37:25.150069  470831 pod_ready.go:104] pod "coredns-5dd5756b68-6grkm" is not "Ready", error: <nil>
	W1003 19:37:24.875623  469677 node_ready.go:57] node "no-preload-643397" has "Ready":"False" status (will retry)
	W1003 19:37:26.875771  469677 node_ready.go:57] node "no-preload-643397" has "Ready":"False" status (will retry)
	W1003 19:37:29.375746  469677 node_ready.go:57] node "no-preload-643397" has "Ready":"False" status (will retry)
	W1003 19:37:27.150205  470831 pod_ready.go:104] pod "coredns-5dd5756b68-6grkm" is not "Ready", error: <nil>
	I1003 19:37:28.649438  470831 pod_ready.go:94] pod "coredns-5dd5756b68-6grkm" is "Ready"
	I1003 19:37:28.649469  470831 pod_ready.go:86] duration metric: took 36.506186575s for pod "coredns-5dd5756b68-6grkm" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:37:28.652598  470831 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-174543" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:37:28.658917  470831 pod_ready.go:94] pod "etcd-old-k8s-version-174543" is "Ready"
	I1003 19:37:28.658946  470831 pod_ready.go:86] duration metric: took 6.321554ms for pod "etcd-old-k8s-version-174543" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:37:28.662163  470831 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-174543" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:37:28.668091  470831 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-174543" is "Ready"
	I1003 19:37:28.668117  470831 pod_ready.go:86] duration metric: took 5.928958ms for pod "kube-apiserver-old-k8s-version-174543" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:37:28.671688  470831 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-174543" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:37:28.846760  470831 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-174543" is "Ready"
	I1003 19:37:28.846792  470831 pod_ready.go:86] duration metric: took 175.076433ms for pod "kube-controller-manager-old-k8s-version-174543" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:37:29.047756  470831 pod_ready.go:83] waiting for pod "kube-proxy-v4mqk" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:37:29.448122  470831 pod_ready.go:94] pod "kube-proxy-v4mqk" is "Ready"
	I1003 19:37:29.448147  470831 pod_ready.go:86] duration metric: took 400.307649ms for pod "kube-proxy-v4mqk" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:37:29.647912  470831 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-174543" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:37:30.050088  470831 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-174543" is "Ready"
	I1003 19:37:30.050180  470831 pod_ready.go:86] duration metric: took 402.239657ms for pod "kube-scheduler-old-k8s-version-174543" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:37:30.050210  470831 pod_ready.go:40] duration metric: took 37.911945126s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1003 19:37:30.129993  470831 start.go:623] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1003 19:37:30.133282  470831 out.go:203] 
	W1003 19:37:30.136402  470831 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1003 19:37:30.139579  470831 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1003 19:37:30.142604  470831 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-174543" cluster and "default" namespace by default
	W1003 19:37:31.376152  469677 node_ready.go:57] node "no-preload-643397" has "Ready":"False" status (will retry)
	I1003 19:37:33.877493  469677 node_ready.go:49] node "no-preload-643397" is "Ready"
	I1003 19:37:33.877520  469677 node_ready.go:38] duration metric: took 13.504811463s for node "no-preload-643397" to be "Ready" ...
	I1003 19:37:33.877534  469677 api_server.go:52] waiting for apiserver process to appear ...
	I1003 19:37:33.877594  469677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 19:37:33.895506  469677 api_server.go:72] duration metric: took 14.590822912s to wait for apiserver process to appear ...
	I1003 19:37:33.895531  469677 api_server.go:88] waiting for apiserver healthz status ...
	I1003 19:37:33.895550  469677 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1003 19:37:33.909806  469677 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1003 19:37:33.910971  469677 api_server.go:141] control plane version: v1.34.1
	I1003 19:37:33.911000  469677 api_server.go:131] duration metric: took 15.46149ms to wait for apiserver health ...
	I1003 19:37:33.911010  469677 system_pods.go:43] waiting for kube-system pods to appear ...
	I1003 19:37:33.916639  469677 system_pods.go:59] 8 kube-system pods found
	I1003 19:37:33.916673  469677 system_pods.go:61] "coredns-66bc5c9577-h8n5p" [d7f4ec9d-9c68-4332-b6c7-e52f424dcd1e] Pending
	I1003 19:37:33.916680  469677 system_pods.go:61] "etcd-no-preload-643397" [642f5548-1caf-4bb4-9780-63e00e8b0a3c] Running
	I1003 19:37:33.916685  469677 system_pods.go:61] "kindnet-7zwct" [bd0ecfeb-3764-425f-b7ae-e6f5b3e161d8] Running
	I1003 19:37:33.916689  469677 system_pods.go:61] "kube-apiserver-no-preload-643397" [6e4aa6fd-218d-45ce-a0d9-a1736936d2d3] Running
	I1003 19:37:33.916694  469677 system_pods.go:61] "kube-controller-manager-no-preload-643397" [29843b74-a1d2-46af-ac5e-06f4d53a0ac4] Running
	I1003 19:37:33.916698  469677 system_pods.go:61] "kube-proxy-lcs2q" [f25c0891-1202-477f-9cc9-5e41c3f1b9fb] Running
	I1003 19:37:33.916702  469677 system_pods.go:61] "kube-scheduler-no-preload-643397" [6865d4a0-3590-465e-81e1-927d271170c0] Running
	I1003 19:37:33.916710  469677 system_pods.go:61] "storage-provisioner" [355c16e4-3158-4ffc-9379-57747ed71cca] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1003 19:37:33.916717  469677 system_pods.go:74] duration metric: took 5.701435ms to wait for pod list to return data ...
	I1003 19:37:33.916791  469677 default_sa.go:34] waiting for default service account to be created ...
	I1003 19:37:33.929062  469677 default_sa.go:45] found service account: "default"
	I1003 19:37:33.929096  469677 default_sa.go:55] duration metric: took 12.295124ms for default service account to be created ...
	I1003 19:37:33.929107  469677 system_pods.go:116] waiting for k8s-apps to be running ...
	I1003 19:37:33.935443  469677 system_pods.go:86] 8 kube-system pods found
	I1003 19:37:33.935482  469677 system_pods.go:89] "coredns-66bc5c9577-h8n5p" [d7f4ec9d-9c68-4332-b6c7-e52f424dcd1e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1003 19:37:33.935488  469677 system_pods.go:89] "etcd-no-preload-643397" [642f5548-1caf-4bb4-9780-63e00e8b0a3c] Running
	I1003 19:37:33.935536  469677 system_pods.go:89] "kindnet-7zwct" [bd0ecfeb-3764-425f-b7ae-e6f5b3e161d8] Running
	I1003 19:37:33.935550  469677 system_pods.go:89] "kube-apiserver-no-preload-643397" [6e4aa6fd-218d-45ce-a0d9-a1736936d2d3] Running
	I1003 19:37:33.935556  469677 system_pods.go:89] "kube-controller-manager-no-preload-643397" [29843b74-a1d2-46af-ac5e-06f4d53a0ac4] Running
	I1003 19:37:33.935561  469677 system_pods.go:89] "kube-proxy-lcs2q" [f25c0891-1202-477f-9cc9-5e41c3f1b9fb] Running
	I1003 19:37:33.935566  469677 system_pods.go:89] "kube-scheduler-no-preload-643397" [6865d4a0-3590-465e-81e1-927d271170c0] Running
	I1003 19:37:33.935579  469677 system_pods.go:89] "storage-provisioner" [355c16e4-3158-4ffc-9379-57747ed71cca] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1003 19:37:33.935626  469677 retry.go:31] will retry after 295.140191ms: missing components: kube-dns
	I1003 19:37:34.235258  469677 system_pods.go:86] 8 kube-system pods found
	I1003 19:37:34.235294  469677 system_pods.go:89] "coredns-66bc5c9577-h8n5p" [d7f4ec9d-9c68-4332-b6c7-e52f424dcd1e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1003 19:37:34.235302  469677 system_pods.go:89] "etcd-no-preload-643397" [642f5548-1caf-4bb4-9780-63e00e8b0a3c] Running
	I1003 19:37:34.235309  469677 system_pods.go:89] "kindnet-7zwct" [bd0ecfeb-3764-425f-b7ae-e6f5b3e161d8] Running
	I1003 19:37:34.235339  469677 system_pods.go:89] "kube-apiserver-no-preload-643397" [6e4aa6fd-218d-45ce-a0d9-a1736936d2d3] Running
	I1003 19:37:34.235353  469677 system_pods.go:89] "kube-controller-manager-no-preload-643397" [29843b74-a1d2-46af-ac5e-06f4d53a0ac4] Running
	I1003 19:37:34.235358  469677 system_pods.go:89] "kube-proxy-lcs2q" [f25c0891-1202-477f-9cc9-5e41c3f1b9fb] Running
	I1003 19:37:34.235362  469677 system_pods.go:89] "kube-scheduler-no-preload-643397" [6865d4a0-3590-465e-81e1-927d271170c0] Running
	I1003 19:37:34.235368  469677 system_pods.go:89] "storage-provisioner" [355c16e4-3158-4ffc-9379-57747ed71cca] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1003 19:37:34.235401  469677 retry.go:31] will retry after 248.460437ms: missing components: kube-dns
	I1003 19:37:34.489309  469677 system_pods.go:86] 8 kube-system pods found
	I1003 19:37:34.489347  469677 system_pods.go:89] "coredns-66bc5c9577-h8n5p" [d7f4ec9d-9c68-4332-b6c7-e52f424dcd1e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1003 19:37:34.489354  469677 system_pods.go:89] "etcd-no-preload-643397" [642f5548-1caf-4bb4-9780-63e00e8b0a3c] Running
	I1003 19:37:34.489361  469677 system_pods.go:89] "kindnet-7zwct" [bd0ecfeb-3764-425f-b7ae-e6f5b3e161d8] Running
	I1003 19:37:34.489385  469677 system_pods.go:89] "kube-apiserver-no-preload-643397" [6e4aa6fd-218d-45ce-a0d9-a1736936d2d3] Running
	I1003 19:37:34.489390  469677 system_pods.go:89] "kube-controller-manager-no-preload-643397" [29843b74-a1d2-46af-ac5e-06f4d53a0ac4] Running
	I1003 19:37:34.489395  469677 system_pods.go:89] "kube-proxy-lcs2q" [f25c0891-1202-477f-9cc9-5e41c3f1b9fb] Running
	I1003 19:37:34.489404  469677 system_pods.go:89] "kube-scheduler-no-preload-643397" [6865d4a0-3590-465e-81e1-927d271170c0] Running
	I1003 19:37:34.489412  469677 system_pods.go:89] "storage-provisioner" [355c16e4-3158-4ffc-9379-57747ed71cca] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1003 19:37:34.489427  469677 retry.go:31] will retry after 349.773107ms: missing components: kube-dns
	I1003 19:37:34.842556  469677 system_pods.go:86] 8 kube-system pods found
	I1003 19:37:34.842590  469677 system_pods.go:89] "coredns-66bc5c9577-h8n5p" [d7f4ec9d-9c68-4332-b6c7-e52f424dcd1e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1003 19:37:34.842597  469677 system_pods.go:89] "etcd-no-preload-643397" [642f5548-1caf-4bb4-9780-63e00e8b0a3c] Running
	I1003 19:37:34.842604  469677 system_pods.go:89] "kindnet-7zwct" [bd0ecfeb-3764-425f-b7ae-e6f5b3e161d8] Running
	I1003 19:37:34.842609  469677 system_pods.go:89] "kube-apiserver-no-preload-643397" [6e4aa6fd-218d-45ce-a0d9-a1736936d2d3] Running
	I1003 19:37:34.842617  469677 system_pods.go:89] "kube-controller-manager-no-preload-643397" [29843b74-a1d2-46af-ac5e-06f4d53a0ac4] Running
	I1003 19:37:34.842621  469677 system_pods.go:89] "kube-proxy-lcs2q" [f25c0891-1202-477f-9cc9-5e41c3f1b9fb] Running
	I1003 19:37:34.842632  469677 system_pods.go:89] "kube-scheduler-no-preload-643397" [6865d4a0-3590-465e-81e1-927d271170c0] Running
	I1003 19:37:34.842638  469677 system_pods.go:89] "storage-provisioner" [355c16e4-3158-4ffc-9379-57747ed71cca] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1003 19:37:34.842653  469677 retry.go:31] will retry after 478.014809ms: missing components: kube-dns
	I1003 19:37:35.324852  469677 system_pods.go:86] 8 kube-system pods found
	I1003 19:37:35.324885  469677 system_pods.go:89] "coredns-66bc5c9577-h8n5p" [d7f4ec9d-9c68-4332-b6c7-e52f424dcd1e] Running
	I1003 19:37:35.324892  469677 system_pods.go:89] "etcd-no-preload-643397" [642f5548-1caf-4bb4-9780-63e00e8b0a3c] Running
	I1003 19:37:35.324897  469677 system_pods.go:89] "kindnet-7zwct" [bd0ecfeb-3764-425f-b7ae-e6f5b3e161d8] Running
	I1003 19:37:35.324905  469677 system_pods.go:89] "kube-apiserver-no-preload-643397" [6e4aa6fd-218d-45ce-a0d9-a1736936d2d3] Running
	I1003 19:37:35.324940  469677 system_pods.go:89] "kube-controller-manager-no-preload-643397" [29843b74-a1d2-46af-ac5e-06f4d53a0ac4] Running
	I1003 19:37:35.324953  469677 system_pods.go:89] "kube-proxy-lcs2q" [f25c0891-1202-477f-9cc9-5e41c3f1b9fb] Running
	I1003 19:37:35.324958  469677 system_pods.go:89] "kube-scheduler-no-preload-643397" [6865d4a0-3590-465e-81e1-927d271170c0] Running
	I1003 19:37:35.324962  469677 system_pods.go:89] "storage-provisioner" [355c16e4-3158-4ffc-9379-57747ed71cca] Running
	I1003 19:37:35.324969  469677 system_pods.go:126] duration metric: took 1.395856253s to wait for k8s-apps to be running ...
	I1003 19:37:35.324982  469677 system_svc.go:44] waiting for kubelet service to be running ....
	I1003 19:37:35.325049  469677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 19:37:35.338955  469677 system_svc.go:56] duration metric: took 13.963268ms WaitForService to wait for kubelet
	I1003 19:37:35.339034  469677 kubeadm.go:586] duration metric: took 16.034355182s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 19:37:35.339070  469677 node_conditions.go:102] verifying NodePressure condition ...
	I1003 19:37:35.342074  469677 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1003 19:37:35.342109  469677 node_conditions.go:123] node cpu capacity is 2
	I1003 19:37:35.342126  469677 node_conditions.go:105] duration metric: took 3.043245ms to run NodePressure ...
	I1003 19:37:35.342138  469677 start.go:241] waiting for startup goroutines ...
	I1003 19:37:35.342146  469677 start.go:246] waiting for cluster config update ...
	I1003 19:37:35.342158  469677 start.go:255] writing updated cluster config ...
	I1003 19:37:35.342457  469677 ssh_runner.go:195] Run: rm -f paused
	I1003 19:37:35.346951  469677 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1003 19:37:35.350667  469677 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-h8n5p" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:37:35.355997  469677 pod_ready.go:94] pod "coredns-66bc5c9577-h8n5p" is "Ready"
	I1003 19:37:35.356030  469677 pod_ready.go:86] duration metric: took 5.334275ms for pod "coredns-66bc5c9577-h8n5p" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:37:35.358383  469677 pod_ready.go:83] waiting for pod "etcd-no-preload-643397" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:37:35.363206  469677 pod_ready.go:94] pod "etcd-no-preload-643397" is "Ready"
	I1003 19:37:35.363231  469677 pod_ready.go:86] duration metric: took 4.821224ms for pod "etcd-no-preload-643397" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:37:35.366173  469677 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-643397" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:37:35.370975  469677 pod_ready.go:94] pod "kube-apiserver-no-preload-643397" is "Ready"
	I1003 19:37:35.371012  469677 pod_ready.go:86] duration metric: took 4.811206ms for pod "kube-apiserver-no-preload-643397" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:37:35.375547  469677 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-643397" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:37:35.751762  469677 pod_ready.go:94] pod "kube-controller-manager-no-preload-643397" is "Ready"
	I1003 19:37:35.751787  469677 pod_ready.go:86] duration metric: took 376.212677ms for pod "kube-controller-manager-no-preload-643397" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:37:35.951184  469677 pod_ready.go:83] waiting for pod "kube-proxy-lcs2q" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:37:36.350602  469677 pod_ready.go:94] pod "kube-proxy-lcs2q" is "Ready"
	I1003 19:37:36.350635  469677 pod_ready.go:86] duration metric: took 399.421484ms for pod "kube-proxy-lcs2q" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:37:36.550913  469677 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-643397" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:37:36.951534  469677 pod_ready.go:94] pod "kube-scheduler-no-preload-643397" is "Ready"
	I1003 19:37:36.951574  469677 pod_ready.go:86] duration metric: took 400.633013ms for pod "kube-scheduler-no-preload-643397" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:37:36.951587  469677 pod_ready.go:40] duration metric: took 1.604603534s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1003 19:37:37.024926  469677 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1003 19:37:37.028838  469677 out.go:179] * Done! kubectl is now configured to use "no-preload-643397" cluster and "default" namespace by default
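	(The run above ends with the usual readiness gate sequence: the apiserver healthz probe at https://192.168.76.2:8443/healthz returns 200 "ok" before the per-pod "Ready" waits begin. Below is a minimal, hypothetical sketch of that kind of healthz polling, not minikube's actual api_server.go code; the URL is taken from the log, and TLS verification is skipped purely for brevity in this sketch, whereas the real flow authenticates via the cluster's kubeconfig credentials.)

	// healthz_poll.go: poll an apiserver /healthz endpoint until it answers 200 "ok".
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// Assumption for this sketch only: accept the apiserver's self-signed cert.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				// Matches the "returned 200: ok" lines captured in the log above.
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.76.2:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("apiserver healthy")
	}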
	
	
	==> CRI-O <==
	Oct 03 19:37:29 old-k8s-version-174543 crio[653]: time="2025-10-03T19:37:29.696899174Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 03 19:37:29 old-k8s-version-174543 crio[653]: time="2025-10-03T19:37:29.702927415Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 03 19:37:29 old-k8s-version-174543 crio[653]: time="2025-10-03T19:37:29.702961992Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 03 19:37:29 old-k8s-version-174543 crio[653]: time="2025-10-03T19:37:29.702983391Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 03 19:37:29 old-k8s-version-174543 crio[653]: time="2025-10-03T19:37:29.706498799Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 03 19:37:29 old-k8s-version-174543 crio[653]: time="2025-10-03T19:37:29.706530972Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 03 19:37:29 old-k8s-version-174543 crio[653]: time="2025-10-03T19:37:29.706552101Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 03 19:37:29 old-k8s-version-174543 crio[653]: time="2025-10-03T19:37:29.70958829Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 03 19:37:29 old-k8s-version-174543 crio[653]: time="2025-10-03T19:37:29.709624032Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 03 19:37:29 old-k8s-version-174543 crio[653]: time="2025-10-03T19:37:29.709649123Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 03 19:37:29 old-k8s-version-174543 crio[653]: time="2025-10-03T19:37:29.713189779Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 03 19:37:29 old-k8s-version-174543 crio[653]: time="2025-10-03T19:37:29.713222403Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 03 19:37:36 old-k8s-version-174543 crio[653]: time="2025-10-03T19:37:36.373425097Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=4a773727-58fc-4840-bf48-3138bc9db99e name=/runtime.v1.ImageService/ImageStatus
	Oct 03 19:37:36 old-k8s-version-174543 crio[653]: time="2025-10-03T19:37:36.374314784Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=c887ea95-2152-4339-aa97-887acb0a9f2a name=/runtime.v1.ImageService/ImageStatus
	Oct 03 19:37:36 old-k8s-version-174543 crio[653]: time="2025-10-03T19:37:36.375429902Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vfkv8/dashboard-metrics-scraper" id=8c17fc94-f414-425b-91f5-8801aaff294a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:37:36 old-k8s-version-174543 crio[653]: time="2025-10-03T19:37:36.375634688Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 19:37:36 old-k8s-version-174543 crio[653]: time="2025-10-03T19:37:36.384517824Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 19:37:36 old-k8s-version-174543 crio[653]: time="2025-10-03T19:37:36.385151565Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 19:37:36 old-k8s-version-174543 crio[653]: time="2025-10-03T19:37:36.414334482Z" level=info msg="Created container c2d2e81f1c95c24f945e4ca4a6f6e6308d203a2030802e620a0adb06b519a7d2: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vfkv8/dashboard-metrics-scraper" id=8c17fc94-f414-425b-91f5-8801aaff294a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:37:36 old-k8s-version-174543 crio[653]: time="2025-10-03T19:37:36.415434231Z" level=info msg="Starting container: c2d2e81f1c95c24f945e4ca4a6f6e6308d203a2030802e620a0adb06b519a7d2" id=713b1eb0-4bb0-4111-8ad3-5d0da382113b name=/runtime.v1.RuntimeService/StartContainer
	Oct 03 19:37:36 old-k8s-version-174543 crio[653]: time="2025-10-03T19:37:36.417256757Z" level=info msg="Started container" PID=1703 containerID=c2d2e81f1c95c24f945e4ca4a6f6e6308d203a2030802e620a0adb06b519a7d2 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vfkv8/dashboard-metrics-scraper id=713b1eb0-4bb0-4111-8ad3-5d0da382113b name=/runtime.v1.RuntimeService/StartContainer sandboxID=30463e946e653a6d9481df30b6a6f942304353af5b615475044b4ca1f702db33
	Oct 03 19:37:36 old-k8s-version-174543 conmon[1699]: conmon c2d2e81f1c95c24f945e <ninfo>: container 1703 exited with status 1
	Oct 03 19:37:36 old-k8s-version-174543 crio[653]: time="2025-10-03T19:37:36.7780258Z" level=info msg="Removing container: 9641d990cd3d20c343b9117d55b8144f7a0bcf421422c6cb22409e21e8da9cf7" id=5ea0def3-0f3c-466e-9292-5cb80b4ab322 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 03 19:37:36 old-k8s-version-174543 crio[653]: time="2025-10-03T19:37:36.785184414Z" level=info msg="Error loading conmon cgroup of container 9641d990cd3d20c343b9117d55b8144f7a0bcf421422c6cb22409e21e8da9cf7: cgroup deleted" id=5ea0def3-0f3c-466e-9292-5cb80b4ab322 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 03 19:37:36 old-k8s-version-174543 crio[653]: time="2025-10-03T19:37:36.78863057Z" level=info msg="Removed container 9641d990cd3d20c343b9117d55b8144f7a0bcf421422c6cb22409e21e8da9cf7: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vfkv8/dashboard-metrics-scraper" id=5ea0def3-0f3c-466e-9292-5cb80b4ab322 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	c2d2e81f1c95c       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           12 seconds ago       Exited              dashboard-metrics-scraper   2                   30463e946e653       dashboard-metrics-scraper-5f989dc9cf-vfkv8       kubernetes-dashboard
	299e25627798d       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           28 seconds ago       Running             storage-provisioner         2                   ac2360cd7dfe9       storage-provisioner                              kube-system
	d250f6446c88c       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   39 seconds ago       Running             kubernetes-dashboard        0                   bf45efee6adea       kubernetes-dashboard-8694d4445c-4tgnz            kubernetes-dashboard
	edf79b93e4b38       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           59 seconds ago       Running             coredns                     1                   655d88fe34d01       coredns-5dd5756b68-6grkm                         kube-system
	ed93641b7305e       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           59 seconds ago       Exited              storage-provisioner         1                   ac2360cd7dfe9       storage-provisioner                              kube-system
	8546643fba7e5       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           59 seconds ago       Running             busybox                     1                   ac73651a8544b       busybox                                          default
	b0164ebd7fa62       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           59 seconds ago       Running             kindnet-cni                 1                   0fbb63c13f83e       kindnet-rwdd6                                    kube-system
	07e35fb642fb1       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           59 seconds ago       Running             kube-proxy                  1                   9ce34b7484cc6       kube-proxy-v4mqk                                 kube-system
	9d777d7ca3f3a       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           About a minute ago   Running             kube-apiserver              1                   d4ad0dd3afe72       kube-apiserver-old-k8s-version-174543            kube-system
	fc8be4f0125f4       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           About a minute ago   Running             kube-scheduler              1                   4dfba0ba15d84       kube-scheduler-old-k8s-version-174543            kube-system
	5178fc63373a8       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           About a minute ago   Running             etcd                        1                   b445848275834       etcd-old-k8s-version-174543                      kube-system
	62ef8d10feba1       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           About a minute ago   Running             kube-controller-manager     1                   a25cd200cb3dd       kube-controller-manager-old-k8s-version-174543   kube-system
	
	
	==> coredns [edf79b93e4b38e2ee91c81e9e314756148e9674922f93889028ee8c7ecc4ef9d] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:60799 - 49190 "HINFO IN 1614990082667808264.2296963525466293270. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020841481s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-174543
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-174543
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a43873c79fc22f8b1ccd29d3dfa635d392b09335
	                    minikube.k8s.io/name=old-k8s-version-174543
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_03T19_35_35_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 03 Oct 2025 19:35:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-174543
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 03 Oct 2025 19:37:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 03 Oct 2025 19:37:39 +0000   Fri, 03 Oct 2025 19:35:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 03 Oct 2025 19:37:39 +0000   Fri, 03 Oct 2025 19:35:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 03 Oct 2025 19:37:39 +0000   Fri, 03 Oct 2025 19:35:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 03 Oct 2025 19:37:39 +0000   Fri, 03 Oct 2025 19:36:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-174543
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 781e6cd6dcfe4176a1510d7d87dc61ef
	  System UUID:                d17a7f15-898a-43d2-a8ef-eaca6b0b9649
	  Boot ID:                    3762136e-8bec-4104-a5cb-0b1976f6048e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 coredns-5dd5756b68-6grkm                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m2s
	  kube-system                 etcd-old-k8s-version-174543                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m15s
	  kube-system                 kindnet-rwdd6                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m2s
	  kube-system                 kube-apiserver-old-k8s-version-174543             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m17s
	  kube-system                 kube-controller-manager-old-k8s-version-174543    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m15s
	  kube-system                 kube-proxy-v4mqk                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-scheduler-old-k8s-version-174543             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m15s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-vfkv8        0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-4tgnz             0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 2m                 kube-proxy       
	  Normal  Starting                 57s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m15s              kubelet          Node old-k8s-version-174543 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m15s              kubelet          Node old-k8s-version-174543 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m15s              kubelet          Node old-k8s-version-174543 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m15s              kubelet          Starting kubelet.
	  Normal  RegisteredNode           2m3s               node-controller  Node old-k8s-version-174543 event: Registered Node old-k8s-version-174543 in Controller
	  Normal  NodeReady                108s               kubelet          Node old-k8s-version-174543 status is now: NodeReady
	  Normal  Starting                 70s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  70s (x8 over 70s)  kubelet          Node old-k8s-version-174543 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    70s (x8 over 70s)  kubelet          Node old-k8s-version-174543 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     70s (x8 over 70s)  kubelet          Node old-k8s-version-174543 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           48s                node-controller  Node old-k8s-version-174543 event: Registered Node old-k8s-version-174543 in Controller
	
	
	==> dmesg <==
	[Oct 3 19:07] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:08] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:09] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:10] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:11] overlayfs: idmapped layers are currently not supported
	[  +4.287643] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:12] overlayfs: idmapped layers are currently not supported
	[ +24.839009] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:13] overlayfs: idmapped layers are currently not supported
	[ +26.493253] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:15] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:16] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:17] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000010] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Oct 3 19:18] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:20] overlayfs: idmapped layers are currently not supported
	[ +32.018892] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:22] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:24] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:26] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:32] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:34] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:35] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:36] overlayfs: idmapped layers are currently not supported
	[  +4.740983] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [5178fc63373a85b7ab0aa3b1194bd3b13ba6e413c7f9fcf141e7a055caeea3d9] <==
	{"level":"info","ts":"2025-10-03T19:36:40.953045Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-03T19:36:40.928856Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-03T19:36:40.953085Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-03T19:36:40.928942Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-10-03T19:36:40.929244Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-10-03T19:36:40.953206Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-10-03T19:36:40.953287Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-03T19:36:40.953313Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-03T19:36:40.929331Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-03T19:36:40.968516Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-03T19:36:40.968563Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-03T19:36:41.968785Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-03T19:36:41.968903Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-03T19:36:41.968958Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-10-03T19:36:41.969Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-10-03T19:36:41.969033Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-03T19:36:41.96907Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-10-03T19:36:41.969098Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-03T19:36:41.978279Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-174543 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-03T19:36:41.978481Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-03T19:36:41.979529Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-10-03T19:36:41.996219Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-03T19:36:41.997372Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-03T19:36:41.997501Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-03T19:36:41.997535Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 19:37:49 up  2:20,  0 user,  load average: 5.21, 2.57, 2.05
	Linux old-k8s-version-174543 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b0164ebd7fa623d22d654d8c31fba34f430360c496ed08d6a01ebbe6ad7fa8fd] <==
	I1003 19:36:49.423918       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1003 19:36:49.430516       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1003 19:36:49.430724       1 main.go:148] setting mtu 1500 for CNI 
	I1003 19:36:49.430766       1 main.go:178] kindnetd IP family: "ipv4"
	I1003 19:36:49.430798       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-03T19:36:49Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1003 19:36:49.694805       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1003 19:36:49.705061       1 controller.go:381] "Waiting for informer caches to sync"
	I1003 19:36:49.705186       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1003 19:36:49.706084       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1003 19:37:19.695333       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1003 19:37:19.705855       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1003 19:37:19.706072       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1003 19:37:19.706200       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1003 19:37:21.305902       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1003 19:37:21.305992       1 metrics.go:72] Registering metrics
	I1003 19:37:21.306087       1 controller.go:711] "Syncing nftables rules"
	I1003 19:37:29.695171       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1003 19:37:29.696577       1 main.go:301] handling current node
	I1003 19:37:39.694408       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1003 19:37:39.694442       1 main.go:301] handling current node
	
	
	==> kube-apiserver [9d777d7ca3f3aae2a67724d1a6f8ab7dbc9844b33527c107ab163508dd940d95] <==
	I1003 19:36:47.918105       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1003 19:36:47.939329       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1003 19:36:47.939695       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1003 19:36:47.939716       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1003 19:36:47.939836       1 shared_informer.go:318] Caches are synced for configmaps
	I1003 19:36:47.939913       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1003 19:36:47.961522       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1003 19:36:47.962155       1 aggregator.go:166] initial CRD sync complete...
	I1003 19:36:47.962178       1 autoregister_controller.go:141] Starting autoregister controller
	I1003 19:36:47.962185       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1003 19:36:47.962192       1 cache.go:39] Caches are synced for autoregister controller
	I1003 19:36:47.993101       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1003 19:36:47.996491       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	E1003 19:36:48.135017       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1003 19:36:48.449046       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1003 19:36:51.856824       1 controller.go:624] quota admission added evaluator for: namespaces
	I1003 19:36:51.921354       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1003 19:36:51.955474       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1003 19:36:51.967957       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1003 19:36:51.981401       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1003 19:36:52.042140       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.61.42"}
	I1003 19:36:52.067205       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.37.125"}
	I1003 19:37:01.316612       1 controller.go:624] quota admission added evaluator for: endpoints
	I1003 19:37:01.404261       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1003 19:37:01.459195       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [62ef8d10feba1f56202dc665fa46660c227322fdddf49c3e984ffb9430f54164] <==
	I1003 19:37:01.442241       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="101.548µs"
	I1003 19:37:01.476952       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I1003 19:37:01.476980       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5f989dc9cf to 1"
	I1003 19:37:01.516821       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-4tgnz"
	I1003 19:37:01.522750       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-vfkv8"
	I1003 19:37:01.542452       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="66.139841ms"
	I1003 19:37:01.550547       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="73.486287ms"
	I1003 19:37:01.565678       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="15.069786ms"
	I1003 19:37:01.566038       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="288.333µs"
	I1003 19:37:01.585047       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="97.963µs"
	I1003 19:37:01.616968       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="74.462081ms"
	I1003 19:37:01.634444       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="17.415125ms"
	I1003 19:37:01.634562       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="49.946µs"
	I1003 19:37:01.749947       1 shared_informer.go:318] Caches are synced for garbage collector
	I1003 19:37:01.827864       1 shared_informer.go:318] Caches are synced for garbage collector
	I1003 19:37:01.827995       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1003 19:37:09.756607       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="47.461609ms"
	I1003 19:37:09.756685       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="45.383µs"
	I1003 19:37:15.750037       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="69.4µs"
	I1003 19:37:16.758971       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="77.4µs"
	I1003 19:37:17.755086       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="326.585µs"
	I1003 19:37:28.272829       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="9.966293ms"
	I1003 19:37:28.273870       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="47.886µs"
	I1003 19:37:37.802365       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="78.713µs"
	I1003 19:37:41.867247       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="97.233µs"
	
	
	==> kube-proxy [07e35fb642fb1060de6f5b6fe3a20dcbf4caddf1bf2630c89f54858a905f5d85] <==
	I1003 19:36:50.510482       1 server_others.go:69] "Using iptables proxy"
	I1003 19:36:51.053521       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1003 19:36:51.443230       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1003 19:36:51.461538       1 server_others.go:152] "Using iptables Proxier"
	I1003 19:36:51.461584       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1003 19:36:51.461591       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1003 19:36:51.461620       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1003 19:36:51.461811       1 server.go:846] "Version info" version="v1.28.0"
	I1003 19:36:51.461820       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1003 19:36:51.463226       1 config.go:188] "Starting service config controller"
	I1003 19:36:51.463235       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1003 19:36:51.463252       1 config.go:97] "Starting endpoint slice config controller"
	I1003 19:36:51.463255       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1003 19:36:51.463607       1 config.go:315] "Starting node config controller"
	I1003 19:36:51.463613       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1003 19:36:51.568501       1 shared_informer.go:318] Caches are synced for node config
	I1003 19:36:51.571213       1 shared_informer.go:318] Caches are synced for service config
	I1003 19:36:51.571227       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [fc8be4f0125f487dca2dc76dd1220ac22ffcd4a1e02920fcc8ee321799717ac2] <==
	I1003 19:36:46.083156       1 serving.go:348] Generated self-signed cert in-memory
	I1003 19:36:51.588827       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1003 19:36:51.588854       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1003 19:36:51.598231       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1003 19:36:51.598318       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1003 19:36:51.598331       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1003 19:36:51.598346       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1003 19:36:51.599999       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1003 19:36:51.600011       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1003 19:36:51.600027       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1003 19:36:51.600031       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1003 19:36:51.699261       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1003 19:36:51.700541       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1003 19:36:51.700621       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 03 19:37:01 old-k8s-version-174543 kubelet[780]: I1003 19:37:01.649867     780 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/cc2e663a-4e2d-43a5-8475-8e8990ff0576-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-4tgnz\" (UID: \"cc2e663a-4e2d-43a5-8475-8e8990ff0576\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-4tgnz"
	Oct 03 19:37:01 old-k8s-version-174543 kubelet[780]: I1003 19:37:01.649969     780 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldflw\" (UniqueName: \"kubernetes.io/projected/cd8f6ac2-0026-47a9-a2dd-63a0e5a68a01-kube-api-access-ldflw\") pod \"dashboard-metrics-scraper-5f989dc9cf-vfkv8\" (UID: \"cd8f6ac2-0026-47a9-a2dd-63a0e5a68a01\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vfkv8"
	Oct 03 19:37:01 old-k8s-version-174543 kubelet[780]: I1003 19:37:01.650057     780 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vthz\" (UniqueName: \"kubernetes.io/projected/cc2e663a-4e2d-43a5-8475-8e8990ff0576-kube-api-access-6vthz\") pod \"kubernetes-dashboard-8694d4445c-4tgnz\" (UID: \"cc2e663a-4e2d-43a5-8475-8e8990ff0576\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-4tgnz"
	Oct 03 19:37:01 old-k8s-version-174543 kubelet[780]: W1003 19:37:01.901541     780 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/e396cf711cf72d67a3eb0308bfe582b67073d4549b3bd8af7083d99767f74cff/crio-30463e946e653a6d9481df30b6a6f942304353af5b615475044b4ca1f702db33 WatchSource:0}: Error finding container 30463e946e653a6d9481df30b6a6f942304353af5b615475044b4ca1f702db33: Status 404 returned error can't find the container with id 30463e946e653a6d9481df30b6a6f942304353af5b615475044b4ca1f702db33
	Oct 03 19:37:01 old-k8s-version-174543 kubelet[780]: W1003 19:37:01.904698     780 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/e396cf711cf72d67a3eb0308bfe582b67073d4549b3bd8af7083d99767f74cff/crio-bf45efee6adea7c85d48e135973f20b098923b9f1d3bfd414a2e11fa3ad3bef0 WatchSource:0}: Error finding container bf45efee6adea7c85d48e135973f20b098923b9f1d3bfd414a2e11fa3ad3bef0: Status 404 returned error can't find the container with id bf45efee6adea7c85d48e135973f20b098923b9f1d3bfd414a2e11fa3ad3bef0
	Oct 03 19:37:09 old-k8s-version-174543 kubelet[780]: I1003 19:37:09.726234     780 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-4tgnz" podStartSLOduration=1.560199058 podCreationTimestamp="2025-10-03 19:37:01 +0000 UTC" firstStartedPulling="2025-10-03 19:37:01.912855405 +0000 UTC m=+22.896315977" lastFinishedPulling="2025-10-03 19:37:09.07882354 +0000 UTC m=+30.062284113" observedRunningTime="2025-10-03 19:37:09.70652637 +0000 UTC m=+30.689986943" watchObservedRunningTime="2025-10-03 19:37:09.726167194 +0000 UTC m=+30.709627767"
	Oct 03 19:37:15 old-k8s-version-174543 kubelet[780]: I1003 19:37:15.723188     780 scope.go:117] "RemoveContainer" containerID="f973d70d4e5266065ddc121570af6d59a783002e373b03da02c022c8aaafc71b"
	Oct 03 19:37:16 old-k8s-version-174543 kubelet[780]: I1003 19:37:16.724363     780 scope.go:117] "RemoveContainer" containerID="9641d990cd3d20c343b9117d55b8144f7a0bcf421422c6cb22409e21e8da9cf7"
	Oct 03 19:37:16 old-k8s-version-174543 kubelet[780]: I1003 19:37:16.725545     780 scope.go:117] "RemoveContainer" containerID="f973d70d4e5266065ddc121570af6d59a783002e373b03da02c022c8aaafc71b"
	Oct 03 19:37:16 old-k8s-version-174543 kubelet[780]: E1003 19:37:16.731628     780 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-vfkv8_kubernetes-dashboard(cd8f6ac2-0026-47a9-a2dd-63a0e5a68a01)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vfkv8" podUID="cd8f6ac2-0026-47a9-a2dd-63a0e5a68a01"
	Oct 03 19:37:17 old-k8s-version-174543 kubelet[780]: I1003 19:37:17.727652     780 scope.go:117] "RemoveContainer" containerID="9641d990cd3d20c343b9117d55b8144f7a0bcf421422c6cb22409e21e8da9cf7"
	Oct 03 19:37:17 old-k8s-version-174543 kubelet[780]: E1003 19:37:17.727999     780 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-vfkv8_kubernetes-dashboard(cd8f6ac2-0026-47a9-a2dd-63a0e5a68a01)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vfkv8" podUID="cd8f6ac2-0026-47a9-a2dd-63a0e5a68a01"
	Oct 03 19:37:20 old-k8s-version-174543 kubelet[780]: I1003 19:37:20.734629     780 scope.go:117] "RemoveContainer" containerID="ed93641b7305ecc78cf05b71981a9b30e56f9dd16df2e6eb2b65f4cc3ef9c10b"
	Oct 03 19:37:21 old-k8s-version-174543 kubelet[780]: I1003 19:37:21.850559     780 scope.go:117] "RemoveContainer" containerID="9641d990cd3d20c343b9117d55b8144f7a0bcf421422c6cb22409e21e8da9cf7"
	Oct 03 19:37:21 old-k8s-version-174543 kubelet[780]: E1003 19:37:21.852475     780 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-vfkv8_kubernetes-dashboard(cd8f6ac2-0026-47a9-a2dd-63a0e5a68a01)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vfkv8" podUID="cd8f6ac2-0026-47a9-a2dd-63a0e5a68a01"
	Oct 03 19:37:36 old-k8s-version-174543 kubelet[780]: I1003 19:37:36.372813     780 scope.go:117] "RemoveContainer" containerID="9641d990cd3d20c343b9117d55b8144f7a0bcf421422c6cb22409e21e8da9cf7"
	Oct 03 19:37:36 old-k8s-version-174543 kubelet[780]: I1003 19:37:36.776906     780 scope.go:117] "RemoveContainer" containerID="9641d990cd3d20c343b9117d55b8144f7a0bcf421422c6cb22409e21e8da9cf7"
	Oct 03 19:37:37 old-k8s-version-174543 kubelet[780]: I1003 19:37:37.782961     780 scope.go:117] "RemoveContainer" containerID="c2d2e81f1c95c24f945e4ca4a6f6e6308d203a2030802e620a0adb06b519a7d2"
	Oct 03 19:37:37 old-k8s-version-174543 kubelet[780]: E1003 19:37:37.783241     780 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-vfkv8_kubernetes-dashboard(cd8f6ac2-0026-47a9-a2dd-63a0e5a68a01)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vfkv8" podUID="cd8f6ac2-0026-47a9-a2dd-63a0e5a68a01"
	Oct 03 19:37:41 old-k8s-version-174543 kubelet[780]: I1003 19:37:41.850666     780 scope.go:117] "RemoveContainer" containerID="c2d2e81f1c95c24f945e4ca4a6f6e6308d203a2030802e620a0adb06b519a7d2"
	Oct 03 19:37:41 old-k8s-version-174543 kubelet[780]: E1003 19:37:41.851024     780 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-vfkv8_kubernetes-dashboard(cd8f6ac2-0026-47a9-a2dd-63a0e5a68a01)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-vfkv8" podUID="cd8f6ac2-0026-47a9-a2dd-63a0e5a68a01"
	Oct 03 19:37:42 old-k8s-version-174543 kubelet[780]: I1003 19:37:42.389855     780 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 03 19:37:42 old-k8s-version-174543 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 03 19:37:42 old-k8s-version-174543 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 03 19:37:42 old-k8s-version-174543 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [d250f6446c88cc68c5a3d4d9876c5bdef89e65ab6fd74df4fbd79456c956c5d8] <==
	2025/10/03 19:37:09 Using namespace: kubernetes-dashboard
	2025/10/03 19:37:09 Using in-cluster config to connect to apiserver
	2025/10/03 19:37:09 Using secret token for csrf signing
	2025/10/03 19:37:09 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/03 19:37:09 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/03 19:37:09 Successful initial request to the apiserver, version: v1.28.0
	2025/10/03 19:37:09 Generating JWE encryption key
	2025/10/03 19:37:09 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/03 19:37:09 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/03 19:37:09 Initializing JWE encryption key from synchronized object
	2025/10/03 19:37:09 Creating in-cluster Sidecar client
	2025/10/03 19:37:09 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/03 19:37:09 Serving insecurely on HTTP port: 9090
	2025/10/03 19:37:39 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/03 19:37:09 Starting overwatch
	
	
	==> storage-provisioner [299e25627798dd200810afddc280b9b6853cae4ac0ac3aba81703a80b719f759] <==
	I1003 19:37:20.890965       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1003 19:37:20.932009       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1003 19:37:20.932112       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1003 19:37:38.334583       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1003 19:37:38.334999       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"dad5d048-770e-49bf-b234-9f07728495ef", APIVersion:"v1", ResourceVersion:"624", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-174543_b17a2419-e152-47d5-8985-5f3c7cfff74a became leader
	I1003 19:37:38.335736       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-174543_b17a2419-e152-47d5-8985-5f3c7cfff74a!
	I1003 19:37:38.447796       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-174543_b17a2419-e152-47d5-8985-5f3c7cfff74a!
	
	
	==> storage-provisioner [ed93641b7305ecc78cf05b71981a9b30e56f9dd16df2e6eb2b65f4cc3ef9c10b] <==
	I1003 19:36:49.843669       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1003 19:37:19.845634       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
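The kindnet and storage-provisioner logs above both time out dialing the kubernetes Service VIP (10.96.0.1:443) shortly after the restart; connectivity is back by the time the second storage-provisioner instance acquires its lease at 19:37:38. A minimal sketch for probing that VIP from inside the node, assuming the old-k8s-version-174543 profile name from the logs and that curl is available in the node image (these commands are illustrative and not part of the original test output):

	# Probe the in-cluster Service VIP; a timeout here usually means the
	# kube-proxy/kindnet rules for 10.96.0.1 have not been programmed yet.
	minikube ssh -p old-k8s-version-174543 -- "curl -sk --max-time 5 https://10.96.0.1:443/version"
	# Compare with the apiserver reached directly on the node IP and port from the logs above.
	minikube ssh -p old-k8s-version-174543 -- "curl -sk --max-time 5 https://192.168.85.2:8443/version"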
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-174543 -n old-k8s-version-174543
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-174543 -n old-k8s-version-174543: exit status 2 (374.267256ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-174543 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (8.17s)
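The kubelet log above ends with kubelet.service being stopped at 19:37:42, yet the API server status probe still reports "Running" with exit status 2. A quick cross-check of the unit states on the node, as a sketch (profile name taken from the logs; not part of the original run):

	# Show whether kubelet and cri-o are actually active on the node after the pause attempt.
	minikube ssh -p old-k8s-version-174543 -- "sudo systemctl is-active kubelet crio"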

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (3.17s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-643397 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-643397 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (330.908145ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-03T19:37:45Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_logs_00302df19cf26dc43b03ea32978d5cabc189a5ea_3.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p no-preload-643397 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
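The MK_ADDON_ENABLE_PAUSED error above comes from minikube's paused-state check, which runs `sudo runc list -f json` on the node and fails because /run/runc does not exist there. A sketch for reproducing that check and for listing containers through the CRI instead, which does not depend on runc's state directory (profile name from the logs; illustrative only):

	# Re-run the exact command that failed inside the node.
	minikube ssh -p no-preload-643397 -- "sudo runc list -f json"
	# List containers via the CRI (crictl) as an alternative view.
	minikube ssh -p no-preload-643397 -- "sudo crictl ps -a"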
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-643397 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-643397 describe deploy/metrics-server -n kube-system: exit status 1 (109.224326ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-643397 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
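(The deployment info above is empty because the metrics-server Deployment was never created: enabling the addon already failed with the runc error shown earlier, so `kubectl describe` returned NotFound.) The image assertion the test makes can also be expressed as a jsonpath query, shown here as a sketch with the context name taken from the logs:

	# Print the container images of the metrics-server Deployment, if it exists;
	# the test expects this to contain fake.domain/registry.k8s.io/echoserver:1.4.
	kubectl --context no-preload-643397 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'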
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-643397
helpers_test.go:243: (dbg) docker inspect no-preload-643397:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2ff626657df750cf9a1329bdf9d0fad13d27c9b5d259ea3feeee2866dd91e501",
	        "Created": "2025-10-03T19:36:25.722491125Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 469979,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-03T19:36:25.810751944Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/2ff626657df750cf9a1329bdf9d0fad13d27c9b5d259ea3feeee2866dd91e501/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2ff626657df750cf9a1329bdf9d0fad13d27c9b5d259ea3feeee2866dd91e501/hostname",
	        "HostsPath": "/var/lib/docker/containers/2ff626657df750cf9a1329bdf9d0fad13d27c9b5d259ea3feeee2866dd91e501/hosts",
	        "LogPath": "/var/lib/docker/containers/2ff626657df750cf9a1329bdf9d0fad13d27c9b5d259ea3feeee2866dd91e501/2ff626657df750cf9a1329bdf9d0fad13d27c9b5d259ea3feeee2866dd91e501-json.log",
	        "Name": "/no-preload-643397",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-643397:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-643397",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2ff626657df750cf9a1329bdf9d0fad13d27c9b5d259ea3feeee2866dd91e501",
	                "LowerDir": "/var/lib/docker/overlay2/75229aada1a7c5cdb860071c36cb7ed171994b4cb8c1ec0abce827b45a7f840c-init/diff:/var/lib/docker/overlay2/87b205803817b0b71a214d995ab7e10a92033bbf72d76d6e052f1d21ccecb313/diff",
	                "MergedDir": "/var/lib/docker/overlay2/75229aada1a7c5cdb860071c36cb7ed171994b4cb8c1ec0abce827b45a7f840c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/75229aada1a7c5cdb860071c36cb7ed171994b4cb8c1ec0abce827b45a7f840c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/75229aada1a7c5cdb860071c36cb7ed171994b4cb8c1ec0abce827b45a7f840c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-643397",
	                "Source": "/var/lib/docker/volumes/no-preload-643397/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-643397",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-643397",
	                "name.minikube.sigs.k8s.io": "no-preload-643397",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "603363bb1c5d17a667b8571472af70ba60938808e43b16fe905d2cf06c86fb10",
	            "SandboxKey": "/var/run/docker/netns/603363bb1c5d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33423"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33424"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33427"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33425"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33426"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-643397": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "82:7d:3c:11:4b:e9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f8dcbeddfcb1aa31ce25637ca1a7b831d4c9bab55d750a9a6b43e000061a3784",
	                    "EndpointID": "7060a3ca0c9703177e5f733c32af5333111f268a393ab1110ddfd62f6d58ba12",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-643397",
	                        "2ff626657df7"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
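The inspect output above shows the no-preload-643397 node container running with the apiserver port 8443/tcp published on 127.0.0.1:33426. Extracting just that mapping, instead of reading the full JSON, can be done as follows (container name from the output above; illustrative only):

	# Print the host port bound to the node container's apiserver port.
	docker port no-preload-643397 8443/tcp
	# Equivalent via an inspect format string.
	docker inspect no-preload-643397 --format '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}'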
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-643397 -n no-preload-643397
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-643397 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-643397 logs -n 25: (1.537547826s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-388132 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-388132             │ jenkins │ v1.37.0 │ 03 Oct 25 19:25 UTC │                     │
	│ ssh     │ -p cilium-388132 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-388132             │ jenkins │ v1.37.0 │ 03 Oct 25 19:25 UTC │                     │
	│ ssh     │ -p cilium-388132 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-388132             │ jenkins │ v1.37.0 │ 03 Oct 25 19:25 UTC │                     │
	│ ssh     │ -p cilium-388132 sudo crio config                                                                                                                                                                                                             │ cilium-388132             │ jenkins │ v1.37.0 │ 03 Oct 25 19:25 UTC │                     │
	│ delete  │ -p cilium-388132                                                                                                                                                                                                                              │ cilium-388132             │ jenkins │ v1.37.0 │ 03 Oct 25 19:25 UTC │ 03 Oct 25 19:25 UTC │
	│ start   │ -p force-systemd-env-159095 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-159095  │ jenkins │ v1.37.0 │ 03 Oct 25 19:25 UTC │                     │
	│ ssh     │ force-systemd-flag-855981 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                                          │ force-systemd-flag-855981 │ jenkins │ v1.37.0 │ 03 Oct 25 19:32 UTC │ 03 Oct 25 19:32 UTC │
	│ delete  │ -p force-systemd-flag-855981                                                                                                                                                                                                                  │ force-systemd-flag-855981 │ jenkins │ v1.37.0 │ 03 Oct 25 19:32 UTC │ 03 Oct 25 19:32 UTC │
	│ start   │ -p cert-expiration-324520 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-324520    │ jenkins │ v1.37.0 │ 03 Oct 25 19:32 UTC │ 03 Oct 25 19:33 UTC │
	│ delete  │ -p force-systemd-env-159095                                                                                                                                                                                                                   │ force-systemd-env-159095  │ jenkins │ v1.37.0 │ 03 Oct 25 19:34 UTC │ 03 Oct 25 19:34 UTC │
	│ start   │ -p cert-options-305866 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-305866       │ jenkins │ v1.37.0 │ 03 Oct 25 19:34 UTC │ 03 Oct 25 19:34 UTC │
	│ ssh     │ cert-options-305866 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-305866       │ jenkins │ v1.37.0 │ 03 Oct 25 19:34 UTC │ 03 Oct 25 19:34 UTC │
	│ ssh     │ -p cert-options-305866 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-305866       │ jenkins │ v1.37.0 │ 03 Oct 25 19:34 UTC │ 03 Oct 25 19:34 UTC │
	│ delete  │ -p cert-options-305866                                                                                                                                                                                                                        │ cert-options-305866       │ jenkins │ v1.37.0 │ 03 Oct 25 19:34 UTC │ 03 Oct 25 19:35 UTC │
	│ start   │ -p old-k8s-version-174543 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-174543    │ jenkins │ v1.37.0 │ 03 Oct 25 19:35 UTC │ 03 Oct 25 19:36 UTC │
	│ start   │ -p cert-expiration-324520 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-324520    │ jenkins │ v1.37.0 │ 03 Oct 25 19:36 UTC │ 03 Oct 25 19:36 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-174543 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-174543    │ jenkins │ v1.37.0 │ 03 Oct 25 19:36 UTC │                     │
	│ stop    │ -p old-k8s-version-174543 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-174543    │ jenkins │ v1.37.0 │ 03 Oct 25 19:36 UTC │ 03 Oct 25 19:36 UTC │
	│ delete  │ -p cert-expiration-324520                                                                                                                                                                                                                     │ cert-expiration-324520    │ jenkins │ v1.37.0 │ 03 Oct 25 19:36 UTC │ 03 Oct 25 19:36 UTC │
	│ start   │ -p no-preload-643397 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-643397         │ jenkins │ v1.37.0 │ 03 Oct 25 19:36 UTC │ 03 Oct 25 19:37 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-174543 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-174543    │ jenkins │ v1.37.0 │ 03 Oct 25 19:36 UTC │ 03 Oct 25 19:36 UTC │
	│ start   │ -p old-k8s-version-174543 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-174543    │ jenkins │ v1.37.0 │ 03 Oct 25 19:36 UTC │ 03 Oct 25 19:37 UTC │
	│ image   │ old-k8s-version-174543 image list --format=json                                                                                                                                                                                               │ old-k8s-version-174543    │ jenkins │ v1.37.0 │ 03 Oct 25 19:37 UTC │ 03 Oct 25 19:37 UTC │
	│ pause   │ -p old-k8s-version-174543 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-174543    │ jenkins │ v1.37.0 │ 03 Oct 25 19:37 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-643397 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-643397         │ jenkins │ v1.37.0 │ 03 Oct 25 19:37 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/03 19:36:30
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 19:36:30.250303  470831 out.go:360] Setting OutFile to fd 1 ...
	I1003 19:36:30.250494  470831 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 19:36:30.250523  470831 out.go:374] Setting ErrFile to fd 2...
	I1003 19:36:30.250546  470831 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 19:36:30.250819  470831 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-284583/.minikube/bin
	I1003 19:36:30.251259  470831 out.go:368] Setting JSON to false
	I1003 19:36:30.252174  470831 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8342,"bootTime":1759511849,"procs":166,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1003 19:36:30.252267  470831 start.go:140] virtualization:  
	I1003 19:36:30.257178  470831 out.go:179] * [old-k8s-version-174543] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1003 19:36:30.260325  470831 out.go:179]   - MINIKUBE_LOCATION=21625
	I1003 19:36:30.260401  470831 notify.go:220] Checking for updates...
	I1003 19:36:30.267120  470831 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 19:36:30.270199  470831 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21625-284583/kubeconfig
	I1003 19:36:30.276956  470831 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-284583/.minikube
	I1003 19:36:30.279893  470831 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1003 19:36:30.282916  470831 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 19:36:30.286374  470831 config.go:182] Loaded profile config "old-k8s-version-174543": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1003 19:36:30.289864  470831 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1003 19:36:30.292678  470831 driver.go:421] Setting default libvirt URI to qemu:///system
	I1003 19:36:30.336883  470831 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1003 19:36:30.337040  470831 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 19:36:30.414358  470831 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:46 OomKillDisable:true NGoroutines:60 SystemTime:2025-10-03 19:36:30.404346993 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1003 19:36:30.414469  470831 docker.go:318] overlay module found
	I1003 19:36:30.417827  470831 out.go:179] * Using the docker driver based on existing profile
	I1003 19:36:30.420720  470831 start.go:304] selected driver: docker
	I1003 19:36:30.420758  470831 start.go:924] validating driver "docker" against &{Name:old-k8s-version-174543 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-174543 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 19:36:30.420853  470831 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 19:36:30.421578  470831 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 19:36:30.506943  470831 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:46 OomKillDisable:true NGoroutines:60 SystemTime:2025-10-03 19:36:30.493477103 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1003 19:36:30.507327  470831 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 19:36:30.507368  470831 cni.go:84] Creating CNI manager for ""
	I1003 19:36:30.507434  470831 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 19:36:30.507477  470831 start.go:348] cluster config:
	{Name:old-k8s-version-174543 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-174543 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 19:36:30.510812  470831 out.go:179] * Starting "old-k8s-version-174543" primary control-plane node in "old-k8s-version-174543" cluster
	I1003 19:36:30.513670  470831 cache.go:123] Beginning downloading kic base image for docker with crio
	I1003 19:36:30.516637  470831 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1003 19:36:30.519439  470831 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1003 19:36:30.519507  470831 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21625-284583/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1003 19:36:30.519517  470831 cache.go:58] Caching tarball of preloaded images
	I1003 19:36:30.519513  470831 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1003 19:36:30.519599  470831 preload.go:233] Found /home/jenkins/minikube-integration/21625-284583/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1003 19:36:30.519608  470831 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1003 19:36:30.519724  470831 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/old-k8s-version-174543/config.json ...
	I1003 19:36:30.540975  470831 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1003 19:36:30.540996  470831 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1003 19:36:30.541009  470831 cache.go:232] Successfully downloaded all kic artifacts
	I1003 19:36:30.541031  470831 start.go:360] acquireMachinesLock for old-k8s-version-174543: {Name:mk19048ea0453627d87a673cd3a2fbc4574461a1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 19:36:30.541081  470831 start.go:364] duration metric: took 34.183µs to acquireMachinesLock for "old-k8s-version-174543"
	I1003 19:36:30.541100  470831 start.go:96] Skipping create...Using existing machine configuration
	I1003 19:36:30.541105  470831 fix.go:54] fixHost starting: 
	I1003 19:36:30.541364  470831 cli_runner.go:164] Run: docker container inspect old-k8s-version-174543 --format={{.State.Status}}
	I1003 19:36:30.557751  470831 fix.go:112] recreateIfNeeded on old-k8s-version-174543: state=Stopped err=<nil>
	W1003 19:36:30.557780  470831 fix.go:138] unexpected machine state, will restart: <nil>
	I1003 19:36:29.888287  469677 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-643397
	
	I1003 19:36:29.888312  469677 ubuntu.go:182] provisioning hostname "no-preload-643397"
	I1003 19:36:29.888373  469677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-643397
	I1003 19:36:29.911157  469677 main.go:141] libmachine: Using SSH client type: native
	I1003 19:36:29.911451  469677 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1003 19:36:29.911465  469677 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-643397 && echo "no-preload-643397" | sudo tee /etc/hostname
	I1003 19:36:30.097224  469677 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-643397
	
	I1003 19:36:30.097314  469677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-643397
	I1003 19:36:30.129074  469677 main.go:141] libmachine: Using SSH client type: native
	I1003 19:36:30.129399  469677 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1003 19:36:30.129417  469677 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-643397' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-643397/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-643397' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 19:36:30.275239  469677 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 19:36:30.275263  469677 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21625-284583/.minikube CaCertPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21625-284583/.minikube}
	I1003 19:36:30.275285  469677 ubuntu.go:190] setting up certificates
	I1003 19:36:30.275296  469677 provision.go:84] configureAuth start
	I1003 19:36:30.275356  469677 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-643397
	I1003 19:36:30.296110  469677 provision.go:143] copyHostCerts
	I1003 19:36:30.296190  469677 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-284583/.minikube/ca.pem, removing ...
	I1003 19:36:30.296200  469677 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-284583/.minikube/ca.pem
	I1003 19:36:30.296284  469677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21625-284583/.minikube/ca.pem (1082 bytes)
	I1003 19:36:30.296395  469677 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-284583/.minikube/cert.pem, removing ...
	I1003 19:36:30.296404  469677 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-284583/.minikube/cert.pem
	I1003 19:36:30.296438  469677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21625-284583/.minikube/cert.pem (1123 bytes)
	I1003 19:36:30.296491  469677 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-284583/.minikube/key.pem, removing ...
	I1003 19:36:30.296496  469677 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-284583/.minikube/key.pem
	I1003 19:36:30.296519  469677 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21625-284583/.minikube/key.pem (1675 bytes)
	I1003 19:36:30.296573  469677 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca-key.pem org=jenkins.no-preload-643397 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-643397]
	I1003 19:36:31.243632  469677 provision.go:177] copyRemoteCerts
	I1003 19:36:31.243707  469677 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 19:36:31.243750  469677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-643397
	I1003 19:36:31.265968  469677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/no-preload-643397/id_rsa Username:docker}
	I1003 19:36:31.367118  469677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 19:36:31.394435  469677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1003 19:36:31.426437  469677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1003 19:36:31.460100  469677 provision.go:87] duration metric: took 1.18478156s to configureAuth
	I1003 19:36:31.460175  469677 ubuntu.go:206] setting minikube options for container-runtime
	I1003 19:36:31.460399  469677 config.go:182] Loaded profile config "no-preload-643397": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 19:36:31.460582  469677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-643397
	I1003 19:36:31.483776  469677 main.go:141] libmachine: Using SSH client type: native
	I1003 19:36:31.484112  469677 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33423 <nil> <nil>}
	I1003 19:36:31.484128  469677 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1003 19:36:31.741630  469677 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1003 19:36:31.741713  469677 machine.go:96] duration metric: took 5.061104012s to provisionDockerMachine
	I1003 19:36:31.741739  469677 client.go:171] duration metric: took 6.85414651s to LocalClient.Create
	I1003 19:36:31.741791  469677 start.go:167] duration metric: took 6.854271353s to libmachine.API.Create "no-preload-643397"
	I1003 19:36:31.741850  469677 start.go:293] postStartSetup for "no-preload-643397" (driver="docker")
	I1003 19:36:31.741878  469677 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 19:36:31.741973  469677 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 19:36:31.742040  469677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-643397
	I1003 19:36:31.759621  469677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/no-preload-643397/id_rsa Username:docker}
	I1003 19:36:31.856950  469677 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 19:36:31.860016  469677 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1003 19:36:31.860050  469677 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1003 19:36:31.860061  469677 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-284583/.minikube/addons for local assets ...
	I1003 19:36:31.860115  469677 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-284583/.minikube/files for local assets ...
	I1003 19:36:31.860195  469677 filesync.go:149] local asset: /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem -> 2864342.pem in /etc/ssl/certs
	I1003 19:36:31.860296  469677 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 19:36:31.867513  469677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem --> /etc/ssl/certs/2864342.pem (1708 bytes)
	I1003 19:36:31.885054  469677 start.go:296] duration metric: took 143.173249ms for postStartSetup
	I1003 19:36:31.885428  469677 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-643397
	I1003 19:36:31.902133  469677 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/config.json ...
	I1003 19:36:31.902412  469677 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 19:36:31.902472  469677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-643397
	I1003 19:36:31.918558  469677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/no-preload-643397/id_rsa Username:docker}
	I1003 19:36:32.012703  469677 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1003 19:36:32.018111  469677 start.go:128] duration metric: took 7.134271436s to createHost
	I1003 19:36:32.018135  469677 start.go:83] releasing machines lock for "no-preload-643397", held for 7.134409604s
	I1003 19:36:32.018208  469677 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-643397
	I1003 19:36:32.035359  469677 ssh_runner.go:195] Run: cat /version.json
	I1003 19:36:32.035416  469677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-643397
	I1003 19:36:32.035661  469677 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 19:36:32.035730  469677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-643397
	I1003 19:36:32.056813  469677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/no-preload-643397/id_rsa Username:docker}
	I1003 19:36:32.057019  469677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/no-preload-643397/id_rsa Username:docker}
	I1003 19:36:32.247781  469677 ssh_runner.go:195] Run: systemctl --version
	I1003 19:36:32.254306  469677 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1003 19:36:32.289494  469677 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1003 19:36:32.294123  469677 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 19:36:32.294252  469677 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 19:36:32.324165  469677 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1003 19:36:32.324188  469677 start.go:495] detecting cgroup driver to use...
	I1003 19:36:32.324220  469677 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1003 19:36:32.324271  469677 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 19:36:32.342515  469677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 19:36:32.355242  469677 docker.go:218] disabling cri-docker service (if available) ...
	I1003 19:36:32.355336  469677 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1003 19:36:32.373198  469677 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1003 19:36:32.393125  469677 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1003 19:36:32.514303  469677 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1003 19:36:32.631659  469677 docker.go:234] disabling docker service ...
	I1003 19:36:32.631788  469677 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1003 19:36:32.656370  469677 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1003 19:36:32.670863  469677 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1003 19:36:32.791284  469677 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1003 19:36:32.911277  469677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1003 19:36:32.924107  469677 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 19:36:32.938287  469677 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1003 19:36:32.938366  469677 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:36:32.946968  469677 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1003 19:36:32.947047  469677 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:36:32.955545  469677 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:36:32.964065  469677 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:36:32.972790  469677 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 19:36:32.980705  469677 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:36:32.989640  469677 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:36:33.004406  469677 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:36:33.016483  469677 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 19:36:33.024887  469677 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 19:36:33.032762  469677 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 19:36:33.145045  469677 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1003 19:36:33.274369  469677 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1003 19:36:33.274467  469677 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1003 19:36:33.278514  469677 start.go:563] Will wait 60s for crictl version
	I1003 19:36:33.278611  469677 ssh_runner.go:195] Run: which crictl
	I1003 19:36:33.282251  469677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1003 19:36:33.311593  469677 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1003 19:36:33.311722  469677 ssh_runner.go:195] Run: crio --version
	I1003 19:36:33.340238  469677 ssh_runner.go:195] Run: crio --version
	I1003 19:36:33.373021  469677 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1003 19:36:33.375998  469677 cli_runner.go:164] Run: docker network inspect no-preload-643397 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 19:36:33.391502  469677 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1003 19:36:33.395406  469677 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 19:36:33.405040  469677 kubeadm.go:883] updating cluster {Name:no-preload-643397 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-643397 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1003 19:36:33.405163  469677 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 19:36:33.405211  469677 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 19:36:33.431075  469677 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1003 19:36:33.431098  469677 cache_images.go:89] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1003 19:36:33.431180  469677 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 19:36:33.431390  469677 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1003 19:36:33.431484  469677 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1003 19:36:33.431563  469677 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1003 19:36:33.431666  469677 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1003 19:36:33.431762  469677 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1003 19:36:33.431843  469677 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1003 19:36:33.431979  469677 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1003 19:36:33.433411  469677 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1003 19:36:33.433668  469677 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 19:36:33.434250  469677 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1003 19:36:33.434497  469677 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1003 19:36:33.434701  469677 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1003 19:36:33.434887  469677 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1003 19:36:33.435088  469677 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1003 19:36:33.435250  469677 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1003 19:36:33.664277  469677 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.1
	I1003 19:36:33.664905  469677 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.4-0
	I1003 19:36:33.686754  469677 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.1
	I1003 19:36:33.688953  469677 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.1
	I1003 19:36:33.693910  469677 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1003 19:36:33.695245  469677 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1003 19:36:33.703603  469677 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.1
	I1003 19:36:33.727298  469677 cache_images.go:117] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196" in container runtime
	I1003 19:36:33.727341  469677 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1003 19:36:33.727413  469677 ssh_runner.go:195] Run: which crictl
	I1003 19:36:33.731888  469677 cache_images.go:117] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e" in container runtime
	I1003 19:36:33.731937  469677 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1003 19:36:33.732001  469677 ssh_runner.go:195] Run: which crictl
	I1003 19:36:33.808862  469677 cache_images.go:117] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9" in container runtime
	I1003 19:36:33.808934  469677 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1003 19:36:33.809006  469677 ssh_runner.go:195] Run: which crictl
	I1003 19:36:33.822519  469677 cache_images.go:117] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0" in container runtime
	I1003 19:36:33.822562  469677 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1003 19:36:33.822661  469677 ssh_runner.go:195] Run: which crictl
	I1003 19:36:33.826959  469677 cache_images.go:117] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc" in container runtime
	I1003 19:36:33.827026  469677 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1003 19:36:33.827082  469677 ssh_runner.go:195] Run: which crictl
	I1003 19:36:33.827187  469677 cache_images.go:117] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1003 19:36:33.827222  469677 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1003 19:36:33.827255  469677 ssh_runner.go:195] Run: which crictl
	I1003 19:36:33.829319  469677 cache_images.go:117] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a" in container runtime
	I1003 19:36:33.829388  469677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1003 19:36:33.829419  469677 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1003 19:36:33.829492  469677 ssh_runner.go:195] Run: which crictl
	I1003 19:36:33.829518  469677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1003 19:36:33.829334  469677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1003 19:36:33.836401  469677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1003 19:36:33.836515  469677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1003 19:36:33.838188  469677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1003 19:36:33.919978  469677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1003 19:36:33.920083  469677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1003 19:36:33.920154  469677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1003 19:36:33.920238  469677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1003 19:36:33.932206  469677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1003 19:36:33.932323  469677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1003 19:36:33.932391  469677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1003 19:36:34.020085  469677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1003 19:36:34.020207  469677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1003 19:36:34.020288  469677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1003 19:36:34.020365  469677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1003 19:36:34.049008  469677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1003 19:36:34.049126  469677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1003 19:36:34.049207  469677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1003 19:36:34.167904  469677 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1003 19:36:34.168055  469677 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1003 19:36:34.168144  469677 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1003 19:36:34.168224  469677 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1003 19:36:34.168292  469677 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1003 19:36:34.168427  469677 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1003 19:36:34.172013  469677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1003 19:36:34.179883  469677 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1003 19:36:34.179981  469677 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1003 19:36:34.180078  469677 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1003 19:36:34.180122  469677 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1003 19:36:34.180194  469677 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1003 19:36:34.180226  469677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (24581632 bytes)
	I1003 19:36:34.180259  469677 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1003 19:36:34.180281  469677 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1003 19:36:34.180325  469677 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1003 19:36:34.180368  469677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (98216960 bytes)
	I1003 19:36:34.180454  469677 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1003 19:36:34.180478  469677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (22790144 bytes)
	I1003 19:36:34.280256  469677 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1003 19:36:34.280295  469677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (20402176 bytes)
	I1003 19:36:34.280348  469677 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1003 19:36:34.280365  469677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (15790592 bytes)
	I1003 19:36:34.280411  469677 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1003 19:36:34.280486  469677 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1003 19:36:34.280533  469677 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1003 19:36:34.280549  469677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	W1003 19:36:34.315289  469677 ssh_runner.go:129] session error, resetting client: ssh: rejected: connect failed (open failed)
	I1003 19:36:34.315337  469677 retry.go:31] will retry after 228.546049ms: ssh: rejected: connect failed (open failed)
	I1003 19:36:34.388834  469677 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1003 19:36:34.388883  469677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (20730880 bytes)
	I1003 19:36:34.388984  469677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-643397
	I1003 19:36:34.430781  469677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/no-preload-643397/id_rsa Username:docker}
	W1003 19:36:34.646437  469677 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1003 19:36:34.646672  469677 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 19:36:30.561067  470831 out.go:252] * Restarting existing docker container for "old-k8s-version-174543" ...
	I1003 19:36:30.561167  470831 cli_runner.go:164] Run: docker start old-k8s-version-174543
	I1003 19:36:30.899786  470831 cli_runner.go:164] Run: docker container inspect old-k8s-version-174543 --format={{.State.Status}}
	I1003 19:36:30.946093  470831 kic.go:430] container "old-k8s-version-174543" state is running.
	I1003 19:36:30.946478  470831 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-174543
	I1003 19:36:30.993439  470831 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/old-k8s-version-174543/config.json ...
	I1003 19:36:30.994728  470831 machine.go:93] provisionDockerMachine start ...
	I1003 19:36:30.994803  470831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-174543
	I1003 19:36:31.031278  470831 main.go:141] libmachine: Using SSH client type: native
	I1003 19:36:31.031607  470831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I1003 19:36:31.031621  470831 main.go:141] libmachine: About to run SSH command:
	hostname
	I1003 19:36:31.032316  470831 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44486->127.0.0.1:33428: read: connection reset by peer
	I1003 19:36:34.204180  470831 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-174543
	
	I1003 19:36:34.204274  470831 ubuntu.go:182] provisioning hostname "old-k8s-version-174543"
	I1003 19:36:34.204364  470831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-174543
	I1003 19:36:34.226862  470831 main.go:141] libmachine: Using SSH client type: native
	I1003 19:36:34.227164  470831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I1003 19:36:34.227176  470831 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-174543 && echo "old-k8s-version-174543" | sudo tee /etc/hostname
	I1003 19:36:34.402266  470831 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-174543
	
	I1003 19:36:34.402352  470831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-174543
	I1003 19:36:34.438692  470831 main.go:141] libmachine: Using SSH client type: native
	I1003 19:36:34.439122  470831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I1003 19:36:34.439145  470831 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-174543' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-174543/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-174543' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 19:36:34.605174  470831 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 19:36:34.605197  470831 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21625-284583/.minikube CaCertPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21625-284583/.minikube}
	I1003 19:36:34.605215  470831 ubuntu.go:190] setting up certificates
	I1003 19:36:34.605225  470831 provision.go:84] configureAuth start
	I1003 19:36:34.605292  470831 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-174543
	I1003 19:36:34.638381  470831 provision.go:143] copyHostCerts
	I1003 19:36:34.638446  470831 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-284583/.minikube/ca.pem, removing ...
	I1003 19:36:34.638463  470831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-284583/.minikube/ca.pem
	I1003 19:36:34.638532  470831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21625-284583/.minikube/ca.pem (1082 bytes)
	I1003 19:36:34.638627  470831 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-284583/.minikube/cert.pem, removing ...
	I1003 19:36:34.638633  470831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-284583/.minikube/cert.pem
	I1003 19:36:34.638661  470831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21625-284583/.minikube/cert.pem (1123 bytes)
	I1003 19:36:34.638725  470831 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-284583/.minikube/key.pem, removing ...
	I1003 19:36:34.638730  470831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-284583/.minikube/key.pem
	I1003 19:36:34.638754  470831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21625-284583/.minikube/key.pem (1675 bytes)
	I1003 19:36:34.638805  470831 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-174543 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-174543]
	I1003 19:36:35.486484  470831 provision.go:177] copyRemoteCerts
	I1003 19:36:35.486873  470831 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 19:36:35.486984  470831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-174543
	I1003 19:36:35.534150  470831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/old-k8s-version-174543/id_rsa Username:docker}
	I1003 19:36:35.650048  470831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 19:36:35.691502  470831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1003 19:36:35.733348  470831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1003 19:36:35.769920  470831 provision.go:87] duration metric: took 1.164682718s to configureAuth
	I1003 19:36:35.769944  470831 ubuntu.go:206] setting minikube options for container-runtime
	I1003 19:36:35.770141  470831 config.go:182] Loaded profile config "old-k8s-version-174543": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1003 19:36:35.770244  470831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-174543
	I1003 19:36:35.790817  470831 main.go:141] libmachine: Using SSH client type: native
	I1003 19:36:35.791140  470831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I1003 19:36:35.791162  470831 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1003 19:36:36.147469  470831 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1003 19:36:36.147497  470831 machine.go:96] duration metric: took 5.152751689s to provisionDockerMachine
	I1003 19:36:36.147509  470831 start.go:293] postStartSetup for "old-k8s-version-174543" (driver="docker")
	I1003 19:36:36.147542  470831 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 19:36:36.147641  470831 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 19:36:36.147697  470831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-174543
	I1003 19:36:36.177232  470831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/old-k8s-version-174543/id_rsa Username:docker}
	I1003 19:36:36.288843  470831 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 19:36:36.292704  470831 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1003 19:36:36.292790  470831 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1003 19:36:36.292816  470831 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-284583/.minikube/addons for local assets ...
	I1003 19:36:36.292902  470831 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-284583/.minikube/files for local assets ...
	I1003 19:36:36.293042  470831 filesync.go:149] local asset: /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem -> 2864342.pem in /etc/ssl/certs
	I1003 19:36:36.293214  470831 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 19:36:36.301319  470831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem --> /etc/ssl/certs/2864342.pem (1708 bytes)
	I1003 19:36:36.333038  470831 start.go:296] duration metric: took 185.510283ms for postStartSetup
	I1003 19:36:36.333203  470831 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 19:36:36.333279  470831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-174543
	I1003 19:36:36.386053  470831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/old-k8s-version-174543/id_rsa Username:docker}
	I1003 19:36:36.497817  470831 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1003 19:36:36.504278  470831 fix.go:56] duration metric: took 5.963165639s for fixHost
	I1003 19:36:36.504310  470831 start.go:83] releasing machines lock for "old-k8s-version-174543", held for 5.963220515s
	I1003 19:36:36.504391  470831 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-174543
	I1003 19:36:36.529637  470831 ssh_runner.go:195] Run: cat /version.json
	I1003 19:36:36.529696  470831 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 19:36:36.529769  470831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-174543
	I1003 19:36:36.529698  470831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-174543
	I1003 19:36:36.561759  470831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/old-k8s-version-174543/id_rsa Username:docker}
	I1003 19:36:36.573961  470831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/old-k8s-version-174543/id_rsa Username:docker}
	I1003 19:36:36.779306  470831 ssh_runner.go:195] Run: systemctl --version
	I1003 19:36:36.786533  470831 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1003 19:36:36.832494  470831 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1003 19:36:36.837907  470831 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 19:36:36.837987  470831 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 19:36:36.847208  470831 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1003 19:36:36.847261  470831 start.go:495] detecting cgroup driver to use...
	I1003 19:36:36.847295  470831 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1003 19:36:36.847354  470831 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 19:36:36.865816  470831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 19:36:36.880072  470831 docker.go:218] disabling cri-docker service (if available) ...
	I1003 19:36:36.880182  470831 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1003 19:36:36.897242  470831 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1003 19:36:36.911479  470831 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1003 19:36:37.052811  470831 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1003 19:36:37.188773  470831 docker.go:234] disabling docker service ...
	I1003 19:36:37.188916  470831 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1003 19:36:37.204769  470831 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1003 19:36:37.221757  470831 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1003 19:36:37.365939  470831 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1003 19:36:37.510943  470831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1003 19:36:37.524746  470831 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 19:36:37.543788  470831 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1003 19:36:37.543905  470831 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:36:37.554315  470831 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1003 19:36:37.554469  470831 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:36:37.564239  470831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:36:37.580279  470831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:36:37.595387  470831 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 19:36:37.603905  470831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:36:37.615691  470831 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:36:37.624764  470831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:36:37.633792  470831 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 19:36:37.642054  470831 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 19:36:37.651457  470831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 19:36:37.863516  470831 ssh_runner.go:195] Run: sudo systemctl restart crio
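
The sed edits above (pause image, cgroup manager, conmon cgroup, default sysctls) all land in the same CRI-O drop-in, /etc/crio/crio.conf.d/02-crio.conf, which is why a daemon-reload plus crio restart follows. A minimal way to confirm the result on the node, with the expected values assembled from the commands logged above (sketch only, not part of the log):

    # sketch: confirm the CRI-O drop-in written by the sed commands above
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # expected (values taken from the log; surrounding TOML layout is an assumption):
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",
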
	I1003 19:36:38.329902  470831 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1003 19:36:38.330025  470831 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1003 19:36:38.335449  470831 start.go:563] Will wait 60s for crictl version
	I1003 19:36:38.335577  470831 ssh_runner.go:195] Run: which crictl
	I1003 19:36:38.341293  470831 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1003 19:36:38.390604  470831 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1003 19:36:38.390763  470831 ssh_runner.go:195] Run: crio --version
	I1003 19:36:38.428125  470831 ssh_runner.go:195] Run: crio --version
	I1003 19:36:38.483368  470831 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1003 19:36:34.789323  469677 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1003 19:36:34.789417  469677 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1003 19:36:34.914066  469677 cache_images.go:117] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1003 19:36:34.914105  469677 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 19:36:34.914164  469677 ssh_runner.go:195] Run: which crictl
	I1003 19:36:35.250876  469677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 19:36:35.272126  469677 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1003 19:36:35.272225  469677 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1003 19:36:35.272326  469677 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1003 19:36:35.437308  469677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 19:36:37.594416  469677 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1: (2.322063589s)
	I1003 19:36:37.594439  469677 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1003 19:36:37.594455  469677 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1003 19:36:37.594503  469677 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1003 19:36:37.594555  469677 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.157226594s)
	I1003 19:36:37.594585  469677 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 19:36:38.486518  470831 cli_runner.go:164] Run: docker network inspect old-k8s-version-174543 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 19:36:38.506334  470831 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1003 19:36:38.511730  470831 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
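
The one-liner above is minikube's upsert idiom for /etc/hosts: strip any line already ending in the hostname, append the fresh IP-to-name mapping, and sudo-copy the temp file back so the shell redirection itself never needs root. A generalized sketch of the same pattern (the helper name is illustrative, not from the log):

    # sketch of the /etc/hosts upsert pattern used above; upsert_host is a hypothetical helper
    upsert_host() {
      local ip="$1" name="$2"
      { grep -v $'\t'"${name}"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/hosts.$$"
      sudo cp "/tmp/hosts.$$" /etc/hosts && rm -f "/tmp/hosts.$$"
    }
    upsert_host 192.168.85.1 host.minikube.internal   # values from the line above
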
	I1003 19:36:38.529412  470831 kubeadm.go:883] updating cluster {Name:old-k8s-version-174543 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-174543 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountU
ID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1003 19:36:38.529522  470831 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1003 19:36:38.529576  470831 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 19:36:38.585748  470831 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 19:36:38.585776  470831 crio.go:433] Images already preloaded, skipping extraction
	I1003 19:36:38.585830  470831 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 19:36:38.628275  470831 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 19:36:38.628301  470831 cache_images.go:85] Images are preloaded, skipping loading
	I1003 19:36:38.628309  470831 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1003 19:36:38.628411  470831 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-174543 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-174543 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1003 19:36:38.628491  470831 ssh_runner.go:195] Run: crio config
	I1003 19:36:38.721955  470831 cni.go:84] Creating CNI manager for ""
	I1003 19:36:38.721980  470831 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 19:36:38.721998  470831 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1003 19:36:38.722029  470831 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-174543 NodeName:old-k8s-version-174543 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 19:36:38.722181  470831 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-174543"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1003 19:36:38.722270  470831 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1003 19:36:38.734990  470831 binaries.go:44] Found k8s binaries, skipping transfer
	I1003 19:36:38.735069  470831 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1003 19:36:38.743828  470831 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1003 19:36:38.757632  470831 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 19:36:38.773219  470831 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
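
The rendered kubeadm config is staged on the node as /var/tmp/minikube/kubeadm.yaml.new; later in this run a diff -u against the live /var/tmp/minikube/kubeadm.yaml decides whether the control plane actually needs reconfiguration. To see what kubeadm would make of the staged file without touching the node, a dry run is one option (sketch only, assuming kubeadm is on PATH; the log itself never runs this):

    # sketch: compare the staged config with the live one, then dry-run it
    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
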
	I1003 19:36:38.788811  470831 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1003 19:36:38.792770  470831 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 19:36:38.807893  470831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 19:36:38.987564  470831 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 19:36:39.006441  470831 certs.go:69] Setting up /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/old-k8s-version-174543 for IP: 192.168.85.2
	I1003 19:36:39.006529  470831 certs.go:195] generating shared ca certs ...
	I1003 19:36:39.006560  470831 certs.go:227] acquiring lock for ca certs: {Name:mk5a10e6c921326e9c211447576eaeb893259ba7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:36:39.006788  470831 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21625-284583/.minikube/ca.key
	I1003 19:36:39.006870  470831 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.key
	I1003 19:36:39.006906  470831 certs.go:257] generating profile certs ...
	I1003 19:36:39.007047  470831 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/old-k8s-version-174543/client.key
	I1003 19:36:39.007163  470831 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/old-k8s-version-174543/apiserver.key.09eade1b
	I1003 19:36:39.007236  470831 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/old-k8s-version-174543/proxy-client.key
	I1003 19:36:39.007404  470831 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/286434.pem (1338 bytes)
	W1003 19:36:39.007468  470831 certs.go:480] ignoring /home/jenkins/minikube-integration/21625-284583/.minikube/certs/286434_empty.pem, impossibly tiny 0 bytes
	I1003 19:36:39.007494  470831 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 19:36:39.007563  470831 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem (1082 bytes)
	I1003 19:36:39.007612  470831 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem (1123 bytes)
	I1003 19:36:39.007665  470831 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/key.pem (1675 bytes)
	I1003 19:36:39.007744  470831 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem (1708 bytes)
	I1003 19:36:39.008444  470831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 19:36:39.070910  470831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1003 19:36:39.102477  470831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 19:36:39.131859  470831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1003 19:36:39.182220  470831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/old-k8s-version-174543/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1003 19:36:39.222848  470831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/old-k8s-version-174543/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1003 19:36:39.247686  470831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/old-k8s-version-174543/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 19:36:39.285222  470831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/old-k8s-version-174543/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1003 19:36:39.310065  470831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 19:36:39.341730  470831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/certs/286434.pem --> /usr/share/ca-certificates/286434.pem (1338 bytes)
	I1003 19:36:39.391536  470831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem --> /usr/share/ca-certificates/2864342.pem (1708 bytes)
	I1003 19:36:39.419719  470831 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1003 19:36:39.435425  470831 ssh_runner.go:195] Run: openssl version
	I1003 19:36:39.442930  470831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 19:36:39.453766  470831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 19:36:39.457959  470831 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  3 18:27 /usr/share/ca-certificates/minikubeCA.pem
	I1003 19:36:39.458064  470831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 19:36:39.503965  470831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1003 19:36:39.513478  470831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/286434.pem && ln -fs /usr/share/ca-certificates/286434.pem /etc/ssl/certs/286434.pem"
	I1003 19:36:39.521868  470831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/286434.pem
	I1003 19:36:39.526259  470831 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  3 18:34 /usr/share/ca-certificates/286434.pem
	I1003 19:36:39.526366  470831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/286434.pem
	I1003 19:36:39.576035  470831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/286434.pem /etc/ssl/certs/51391683.0"
	I1003 19:36:39.587037  470831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2864342.pem && ln -fs /usr/share/ca-certificates/2864342.pem /etc/ssl/certs/2864342.pem"
	I1003 19:36:39.596148  470831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2864342.pem
	I1003 19:36:39.600440  470831 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  3 18:34 /usr/share/ca-certificates/2864342.pem
	I1003 19:36:39.600506  470831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2864342.pem
	I1003 19:36:39.642070  470831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2864342.pem /etc/ssl/certs/3ec20f2e.0"
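
The openssl/ln pairs above populate OpenSSL's hashed CA directory: a CA in /etc/ssl/certs is only found by clients if a symlink named <subject-hash>.0 points at the PEM file. One iteration of that step, using the paths and hash that appear in the log:

    # sketch: install one CA into the OpenSSL hash directory, as the commands above do
    cert=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$cert")      # prints b5213941 for this CA, per the log
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
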
	I1003 19:36:39.650706  470831 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 19:36:39.654963  470831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1003 19:36:39.699817  470831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1003 19:36:39.741524  470831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1003 19:36:39.810137  470831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1003 19:36:39.867659  470831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1003 19:36:39.963823  470831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
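
The -checkend 86400 probes above exit non-zero when a certificate will expire within the next 24 hours, presumably so stale control-plane certs get regenerated rather than reused. The same check over the files named in the log, as a loop:

    # sketch: 24h expiry check over the certs probed above
    for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client \
             etcd/server etcd/healthcheck-client etcd/peer; do
      sudo openssl x509 -noout -checkend 86400 \
        -in "/var/lib/minikube/certs/${c}.crt" || echo "expires within 24h: ${c}.crt"
    done
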
	I1003 19:36:40.065488  470831 kubeadm.go:400] StartCluster: {Name:old-k8s-version-174543 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-174543 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:
docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 19:36:40.065602  470831 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1003 19:36:40.065684  470831 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1003 19:36:40.150314  470831 cri.go:89] found id: "9d777d7ca3f3aae2a67724d1a6f8ab7dbc9844b33527c107ab163508dd940d95"
	I1003 19:36:40.150342  470831 cri.go:89] found id: "62ef8d10feba1f56202dc665fa46660c227322fdddf49c3e984ffb9430f54164"
	I1003 19:36:40.150348  470831 cri.go:89] found id: ""
	I1003 19:36:40.150431  470831 ssh_runner.go:195] Run: sudo runc list -f json
	W1003 19:36:40.209366  470831 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-03T19:36:40Z" level=error msg="open /run/runc: no such file or directory"
	I1003 19:36:40.209465  470831 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 19:36:40.238212  470831 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1003 19:36:40.238235  470831 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1003 19:36:40.238287  470831 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1003 19:36:40.309274  470831 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1003 19:36:40.309771  470831 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-174543" does not appear in /home/jenkins/minikube-integration/21625-284583/kubeconfig
	I1003 19:36:40.309937  470831 kubeconfig.go:62] /home/jenkins/minikube-integration/21625-284583/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-174543" cluster setting kubeconfig missing "old-k8s-version-174543" context setting]
	I1003 19:36:40.310734  470831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/kubeconfig: {Name:mkc1323fd87f4a78231a26d2dab0dff7feecf1e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:36:40.317747  470831 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1003 19:36:40.341224  470831 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1003 19:36:40.341310  470831 kubeadm.go:601] duration metric: took 103.068172ms to restartPrimaryControlPlane
	I1003 19:36:40.341334  470831 kubeadm.go:402] duration metric: took 275.871441ms to StartCluster
	I1003 19:36:40.341373  470831 settings.go:142] acquiring lock: {Name:mkc95577dbc448e3409dfa2b5e53a3a1327cb451 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:36:40.341463  470831 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21625-284583/kubeconfig
	I1003 19:36:40.342096  470831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/kubeconfig: {Name:mkc1323fd87f4a78231a26d2dab0dff7feecf1e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:36:40.342580  470831 config.go:182] Loaded profile config "old-k8s-version-174543": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1003 19:36:40.342648  470831 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1003 19:36:40.342700  470831 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1003 19:36:40.342845  470831 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-174543"
	I1003 19:36:40.342859  470831 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-174543"
	W1003 19:36:40.342865  470831 addons.go:247] addon storage-provisioner should already be in state true
	I1003 19:36:40.342887  470831 host.go:66] Checking if "old-k8s-version-174543" exists ...
	I1003 19:36:40.343383  470831 cli_runner.go:164] Run: docker container inspect old-k8s-version-174543 --format={{.State.Status}}
	I1003 19:36:40.343941  470831 addons.go:69] Setting dashboard=true in profile "old-k8s-version-174543"
	I1003 19:36:40.343965  470831 addons.go:238] Setting addon dashboard=true in "old-k8s-version-174543"
	W1003 19:36:40.343972  470831 addons.go:247] addon dashboard should already be in state true
	I1003 19:36:40.343995  470831 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-174543"
	I1003 19:36:40.344029  470831 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-174543"
	I1003 19:36:40.344003  470831 host.go:66] Checking if "old-k8s-version-174543" exists ...
	I1003 19:36:40.344381  470831 cli_runner.go:164] Run: docker container inspect old-k8s-version-174543 --format={{.State.Status}}
	I1003 19:36:40.344524  470831 cli_runner.go:164] Run: docker container inspect old-k8s-version-174543 --format={{.State.Status}}
	I1003 19:36:40.355866  470831 out.go:179] * Verifying Kubernetes components...
	I1003 19:36:40.368882  470831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 19:36:40.393921  470831 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-174543"
	W1003 19:36:40.393943  470831 addons.go:247] addon default-storageclass should already be in state true
	I1003 19:36:40.393969  470831 host.go:66] Checking if "old-k8s-version-174543" exists ...
	I1003 19:36:40.394399  470831 cli_runner.go:164] Run: docker container inspect old-k8s-version-174543 --format={{.State.Status}}
	I1003 19:36:40.408117  470831 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1003 19:36:40.411103  470831 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1003 19:36:40.414544  470831 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1003 19:36:40.414581  470831 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1003 19:36:40.414658  470831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-174543
	I1003 19:36:40.416772  470831 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 19:36:39.907186  469677 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.312579852s)
	I1003 19:36:39.907232  469677 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1003 19:36:39.907321  469677 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1003 19:36:39.907451  469677 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (2.312938627s)
	I1003 19:36:39.907466  469677 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1003 19:36:39.907481  469677 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1003 19:36:39.907512  469677 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1003 19:36:42.321165  469677 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (2.413625674s)
	I1003 19:36:42.321196  469677 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1003 19:36:42.321217  469677 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1003 19:36:42.321273  469677 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1003 19:36:42.321343  469677 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.414004815s)
	I1003 19:36:42.321363  469677 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1003 19:36:42.321381  469677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1003 19:36:44.503824  469677 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (2.182522238s)
	I1003 19:36:44.503853  469677 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1003 19:36:44.503873  469677 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1003 19:36:44.503931  469677 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1003 19:36:40.420887  470831 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 19:36:40.420912  470831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1003 19:36:40.420985  470831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-174543
	I1003 19:36:40.436447  470831 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1003 19:36:40.436474  470831 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1003 19:36:40.436538  470831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-174543
	I1003 19:36:40.468390  470831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/old-k8s-version-174543/id_rsa Username:docker}
	I1003 19:36:40.480958  470831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/old-k8s-version-174543/id_rsa Username:docker}
	I1003 19:36:40.491657  470831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/old-k8s-version-174543/id_rsa Username:docker}
	I1003 19:36:40.827254  470831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1003 19:36:40.871029  470831 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 19:36:40.871939  470831 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1003 19:36:40.871991  470831 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1003 19:36:40.905985  470831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 19:36:41.091414  470831 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1003 19:36:41.091481  470831 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1003 19:36:41.259108  470831 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1003 19:36:41.259190  470831 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1003 19:36:41.387179  470831 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1003 19:36:41.387248  470831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1003 19:36:41.463609  470831 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1003 19:36:41.463688  470831 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1003 19:36:41.521284  470831 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1003 19:36:41.521352  470831 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1003 19:36:41.571662  470831 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1003 19:36:41.571743  470831 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1003 19:36:41.606256  470831 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1003 19:36:41.606330  470831 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1003 19:36:41.633779  470831 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1003 19:36:41.633855  470831 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1003 19:36:41.682876  470831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
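
The apply above pushes all ten dashboard manifests in one kubectl call, using the cluster's own kubeconfig and the version-matched kubectl under /var/lib/minikube/binaries. A quick way to check what it created, reusing the same invocation style (sketch; the kubernetes-dashboard namespace is an assumption based on dashboard-ns.yaml, not shown in the log):

    # sketch: inspect the dashboard objects created by the apply above
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.28.0/kubectl -n kubernetes-dashboard get deploy,po,svc
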
	I1003 19:36:46.122072  469677 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.618115518s)
	I1003 19:36:46.122096  469677 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1003 19:36:46.122116  469677 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1003 19:36:46.122163  469677 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	I1003 19:36:50.486681  470831 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.659350106s)
	I1003 19:36:50.486868  470831 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (9.615770052s)
	I1003 19:36:50.486999  470831 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-174543" to be "Ready" ...
	I1003 19:36:50.563494  470831 node_ready.go:49] node "old-k8s-version-174543" is "Ready"
	I1003 19:36:50.563627  470831 node_ready.go:38] duration metric: took 76.592907ms for node "old-k8s-version-174543" to be "Ready" ...
	I1003 19:36:50.563657  470831 api_server.go:52] waiting for apiserver process to appear ...
	I1003 19:36:50.563753  470831 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 19:36:51.281166  470831 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.375104087s)
	I1003 19:36:52.074932  470831 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (10.391968891s)
	I1003 19:36:52.075163  470831 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.511377919s)
	I1003 19:36:52.075208  470831 api_server.go:72] duration metric: took 11.73243648s to wait for apiserver process to appear ...
	I1003 19:36:52.075222  470831 api_server.go:88] waiting for apiserver healthz status ...
	I1003 19:36:52.075241  470831 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1003 19:36:52.078448  470831 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-174543 addons enable metrics-server
	
	I1003 19:36:52.081625  470831 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1003 19:36:51.524837  469677 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (5.402654076s)
	I1003 19:36:51.524919  469677 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1003 19:36:51.524959  469677 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1003 19:36:51.525037  469677 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1003 19:36:52.294734  469677 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1003 19:36:52.294769  469677 cache_images.go:124] Successfully loaded all cached images
	I1003 19:36:52.294775  469677 cache_images.go:93] duration metric: took 18.863661907s to LoadCachedImages
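
The 18.9s LoadCachedImages phase above repeats one pattern per image: stat the cached tarball, check whether the runtime already holds the image at the expected digest ("needs transfer" when it does not), crictl rmi any stale copy, then podman load the tarball into the shared containers/storage so CRI-O can see it. A condensed sketch of that per-image step (storage-provisioner used as the example, names from the log):

    # sketch of the per-image load step repeated above
    tar=/var/lib/minikube/images/storage-provisioner_v5
    img=gcr.io/k8s-minikube/storage-provisioner:v5
    sudo /usr/local/bin/crictl rmi "$img" 2>/dev/null || true   # drop any stale copy first
    sudo podman load -i "$tar"                                  # load into the image store CRI-O reads
    sudo crictl images | grep storage-provisioner               # confirm the runtime now sees it
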
	I1003 19:36:52.294786  469677 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1003 19:36:52.294879  469677 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-643397 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-643397 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1003 19:36:52.294960  469677 ssh_runner.go:195] Run: crio config
	I1003 19:36:52.364057  469677 cni.go:84] Creating CNI manager for ""
	I1003 19:36:52.364129  469677 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 19:36:52.364175  469677 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1003 19:36:52.364218  469677 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-643397 NodeName:no-preload-643397 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 19:36:52.364407  469677 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-643397"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1003 19:36:52.364517  469677 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1003 19:36:52.372571  469677 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1003 19:36:52.372685  469677 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1003 19:36:52.380593  469677 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
	I1003 19:36:52.380716  469677 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1003 19:36:52.380924  469677 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21625-284583/.minikube/cache/linux/arm64/v1.34.1/kubeadm
	I1003 19:36:52.381339  469677 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/21625-284583/.minikube/cache/linux/arm64/v1.34.1/kubelet
	I1003 19:36:52.386113  469677 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1003 19:36:52.386150  469677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/cache/linux/arm64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (58130616 bytes)
	I1003 19:36:53.545881  469677 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1003 19:36:53.549863  469677 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1003 19:36:53.549894  469677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/cache/linux/arm64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (71434424 bytes)
	I1003 19:36:53.709681  469677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 19:36:53.732427  469677 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1003 19:36:53.746177  469677 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1003 19:36:53.746216  469677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/cache/linux/arm64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (56426788 bytes)
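
Each binary above is fetched from dl.k8s.io with a ?checksum=file:...sha256 query, i.e. the download is verified against the published .sha256 file before it is cached and scp'd to the node. A manual equivalent, assuming (as those URLs suggest) that the .sha256 file holds just the bare hex digest (sketch, not from the log):

    # sketch: fetch and verify kubelet the way the ?checksum=file: URLs above imply
    ver=v1.34.1
    arch=arm64
    curl -fsSLO "https://dl.k8s.io/release/${ver}/bin/linux/${arch}/kubelet"
    curl -fsSLO "https://dl.k8s.io/release/${ver}/bin/linux/${arch}/kubelet.sha256"
    echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check -
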
	I1003 19:36:54.331746  469677 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1003 19:36:54.343207  469677 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1003 19:36:54.358285  469677 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 19:36:54.373325  469677 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1003 19:36:54.388029  469677 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1003 19:36:54.393493  469677 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 19:36:54.406615  469677 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 19:36:54.534391  469677 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 19:36:54.563833  469677 certs.go:69] Setting up /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397 for IP: 192.168.76.2
	I1003 19:36:54.563855  469677 certs.go:195] generating shared ca certs ...
	I1003 19:36:54.563872  469677 certs.go:227] acquiring lock for ca certs: {Name:mk5a10e6c921326e9c211447576eaeb893259ba7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:36:54.564060  469677 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21625-284583/.minikube/ca.key
	I1003 19:36:54.564138  469677 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.key
	I1003 19:36:54.564177  469677 certs.go:257] generating profile certs ...
	I1003 19:36:54.564260  469677 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/client.key
	I1003 19:36:54.564282  469677 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/client.crt with IP's: []
	I1003 19:36:52.084106  470831 addons.go:514] duration metric: took 11.741369469s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1003 19:36:52.092617  470831 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1003 19:36:52.094379  470831 api_server.go:141] control plane version: v1.28.0
	I1003 19:36:52.094453  470831 api_server.go:131] duration metric: took 19.211581ms to wait for apiserver health ...
	I1003 19:36:52.094475  470831 system_pods.go:43] waiting for kube-system pods to appear ...
	I1003 19:36:52.104999  470831 system_pods.go:59] 8 kube-system pods found
	I1003 19:36:52.105093  470831 system_pods.go:61] "coredns-5dd5756b68-6grkm" [678e0c98-f42a-4a69-8d50-a83a82886a69] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1003 19:36:52.105116  470831 system_pods.go:61] "etcd-old-k8s-version-174543" [8550f5a6-a2dc-4e9b-b623-9d0d9dfd66fd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1003 19:36:52.105151  470831 system_pods.go:61] "kindnet-rwdd6" [3cc7fea5-9441-4250-80b2-05aff82ce727] Running
	I1003 19:36:52.105178  470831 system_pods.go:61] "kube-apiserver-old-k8s-version-174543" [b8ce8574-fafd-4466-b9b8-b12c3ae221b7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1003 19:36:52.105201  470831 system_pods.go:61] "kube-controller-manager-old-k8s-version-174543" [aea29031-128c-4683-b165-ef6f11b79e72] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1003 19:36:52.105235  470831 system_pods.go:61] "kube-proxy-v4mqk" [50d549bb-e122-45af-8dad-b599f07053fd] Running
	I1003 19:36:52.105261  470831 system_pods.go:61] "kube-scheduler-old-k8s-version-174543" [3b73907b-8446-4189-9d96-e02a6c332aa6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1003 19:36:52.105279  470831 system_pods.go:61] "storage-provisioner" [8db23fd8-6872-4901-b61f-a88ac26407a7] Running
	I1003 19:36:52.105314  470831 system_pods.go:74] duration metric: took 10.804885ms to wait for pod list to return data ...
	I1003 19:36:52.105341  470831 default_sa.go:34] waiting for default service account to be created ...
	I1003 19:36:52.109408  470831 default_sa.go:45] found service account: "default"
	I1003 19:36:52.109473  470831 default_sa.go:55] duration metric: took 4.111364ms for default service account to be created ...
	I1003 19:36:52.109507  470831 system_pods.go:116] waiting for k8s-apps to be running ...
	I1003 19:36:52.113674  470831 system_pods.go:86] 8 kube-system pods found
	I1003 19:36:52.113760  470831 system_pods.go:89] "coredns-5dd5756b68-6grkm" [678e0c98-f42a-4a69-8d50-a83a82886a69] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1003 19:36:52.113785  470831 system_pods.go:89] "etcd-old-k8s-version-174543" [8550f5a6-a2dc-4e9b-b623-9d0d9dfd66fd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1003 19:36:52.113822  470831 system_pods.go:89] "kindnet-rwdd6" [3cc7fea5-9441-4250-80b2-05aff82ce727] Running
	I1003 19:36:52.113847  470831 system_pods.go:89] "kube-apiserver-old-k8s-version-174543" [b8ce8574-fafd-4466-b9b8-b12c3ae221b7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1003 19:36:52.113871  470831 system_pods.go:89] "kube-controller-manager-old-k8s-version-174543" [aea29031-128c-4683-b165-ef6f11b79e72] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1003 19:36:52.113906  470831 system_pods.go:89] "kube-proxy-v4mqk" [50d549bb-e122-45af-8dad-b599f07053fd] Running
	I1003 19:36:52.113933  470831 system_pods.go:89] "kube-scheduler-old-k8s-version-174543" [3b73907b-8446-4189-9d96-e02a6c332aa6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1003 19:36:52.113953  470831 system_pods.go:89] "storage-provisioner" [8db23fd8-6872-4901-b61f-a88ac26407a7] Running
	I1003 19:36:52.113990  470831 system_pods.go:126] duration metric: took 4.462457ms to wait for k8s-apps to be running ...
	I1003 19:36:52.114017  470831 system_svc.go:44] waiting for kubelet service to be running ....
	I1003 19:36:52.114104  470831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 19:36:52.129798  470831 system_svc.go:56] duration metric: took 15.772795ms WaitForService to wait for kubelet
	I1003 19:36:52.129872  470831 kubeadm.go:586] duration metric: took 11.787098529s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 19:36:52.129906  470831 node_conditions.go:102] verifying NodePressure condition ...
	I1003 19:36:52.133219  470831 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1003 19:36:52.133315  470831 node_conditions.go:123] node cpu capacity is 2
	I1003 19:36:52.133345  470831 node_conditions.go:105] duration metric: took 3.421679ms to run NodePressure ...
	I1003 19:36:52.133386  470831 start.go:241] waiting for startup goroutines ...
	I1003 19:36:52.133413  470831 start.go:246] waiting for cluster config update ...
	I1003 19:36:52.133439  470831 start.go:255] writing updated cluster config ...
	I1003 19:36:52.133757  470831 ssh_runner.go:195] Run: rm -f paused
	I1003 19:36:52.138185  470831 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1003 19:36:52.143212  470831 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-6grkm" in "kube-system" namespace to be "Ready" or be gone ...
	W1003 19:36:54.151250  470831 pod_ready.go:104] pod "coredns-5dd5756b68-6grkm" is not "Ready", error: <nil>
	I1003 19:36:54.723061  469677 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/client.crt ...
	I1003 19:36:54.723102  469677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/client.crt: {Name:mkea5bfb95d8fdb117792960e5221a8bc9115b50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:36:54.723346  469677 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/client.key ...
	I1003 19:36:54.723364  469677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/client.key: {Name:mkf4738ba9e553f9f9be1784d2e0f6c375d691df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:36:54.723521  469677 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/apiserver.key.ee2e84a9
	I1003 19:36:54.723538  469677 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/apiserver.crt.ee2e84a9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1003 19:36:55.207794  469677 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/apiserver.crt.ee2e84a9 ...
	I1003 19:36:55.207868  469677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/apiserver.crt.ee2e84a9: {Name:mk19ce55b7f476d867b58a46a648e11db58f5a77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:36:55.208085  469677 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/apiserver.key.ee2e84a9 ...
	I1003 19:36:55.208125  469677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/apiserver.key.ee2e84a9: {Name:mkc44185d4065ec27cc61b06ce0bc9de1613954b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:36:55.208247  469677 certs.go:382] copying /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/apiserver.crt.ee2e84a9 -> /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/apiserver.crt
	I1003 19:36:55.208353  469677 certs.go:386] copying /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/apiserver.key.ee2e84a9 -> /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/apiserver.key
	I1003 19:36:55.208436  469677 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/proxy-client.key
	I1003 19:36:55.208469  469677 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/proxy-client.crt with IP's: []
	I1003 19:36:56.304461  469677 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/proxy-client.crt ...
	I1003 19:36:56.304494  469677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/proxy-client.crt: {Name:mkb08c6c1be2a70b1e5ff3f6ddde2e4e9c47ee6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:36:56.304684  469677 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/proxy-client.key ...
	I1003 19:36:56.304701  469677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/proxy-client.key: {Name:mk1a2d478a1729a17beec4d720ca7883e92f1491 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
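
The "generating signed profile cert" lines above show the per-profile PKI being built: a client certificate, an apiserver serving certificate whose SANs cover the service VIP and the node IP ([10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]), and an aggregator proxy-client certificate, each signed by the shared minikube CA. A minimal self-contained sketch of issuing such a serving cert with Go's crypto/x509 (illustrative only, not the certs.go implementation; the CA here is generated on the fly instead of being loaded from .minikube/ca.key, and the validity period is arbitrary):

// Illustrative sketch: issue an apiserver serving certificate signed by a
// CA, with the IP SANs listed in the log above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// In minikube the CA already exists under .minikube/ca.{crt,key};
	// a throwaway CA is generated here so the sketch is self-contained.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(3, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		log.Fatal(err)
	}

	// Serving certificate for the apiserver, valid for the service VIP and node IP.
	leafKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2"),
		},
	}
	leafDER, err := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
}
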
	I1003 19:36:56.304906  469677 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/286434.pem (1338 bytes)
	W1003 19:36:56.304950  469677 certs.go:480] ignoring /home/jenkins/minikube-integration/21625-284583/.minikube/certs/286434_empty.pem, impossibly tiny 0 bytes
	I1003 19:36:56.304965  469677 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 19:36:56.304990  469677 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem (1082 bytes)
	I1003 19:36:56.305016  469677 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem (1123 bytes)
	I1003 19:36:56.305042  469677 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/key.pem (1675 bytes)
	I1003 19:36:56.305090  469677 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem (1708 bytes)
	I1003 19:36:56.305635  469677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 19:36:56.325874  469677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1003 19:36:56.344837  469677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 19:36:56.363293  469677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1003 19:36:56.381085  469677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1003 19:36:56.400919  469677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1003 19:36:56.419228  469677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 19:36:56.438028  469677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1003 19:36:56.455936  469677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem --> /usr/share/ca-certificates/2864342.pem (1708 bytes)
	I1003 19:36:56.474212  469677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 19:36:56.491955  469677 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/certs/286434.pem --> /usr/share/ca-certificates/286434.pem (1338 bytes)
	I1003 19:36:56.510065  469677 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1003 19:36:56.524259  469677 ssh_runner.go:195] Run: openssl version
	I1003 19:36:56.534016  469677 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/286434.pem && ln -fs /usr/share/ca-certificates/286434.pem /etc/ssl/certs/286434.pem"
	I1003 19:36:56.543214  469677 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/286434.pem
	I1003 19:36:56.547972  469677 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  3 18:34 /usr/share/ca-certificates/286434.pem
	I1003 19:36:56.548066  469677 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/286434.pem
	I1003 19:36:56.591319  469677 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/286434.pem /etc/ssl/certs/51391683.0"
	I1003 19:36:56.600012  469677 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2864342.pem && ln -fs /usr/share/ca-certificates/2864342.pem /etc/ssl/certs/2864342.pem"
	I1003 19:36:56.608753  469677 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2864342.pem
	I1003 19:36:56.612596  469677 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  3 18:34 /usr/share/ca-certificates/2864342.pem
	I1003 19:36:56.612712  469677 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2864342.pem
	I1003 19:36:56.654061  469677 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2864342.pem /etc/ssl/certs/3ec20f2e.0"
	I1003 19:36:56.662615  469677 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 19:36:56.672208  469677 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 19:36:56.676572  469677 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  3 18:27 /usr/share/ca-certificates/minikubeCA.pem
	I1003 19:36:56.676683  469677 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 19:36:56.717711  469677 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
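
The ls / openssl x509 -hash / ln -fs sequence above is how the PEM files copied to /usr/share/ca-certificates are registered with the node's OpenSSL trust store: each certificate gets a symlink in /etc/ssl/certs named after its subject hash (for example b5213941.0 for minikubeCA.pem). A rough local equivalent of that step (illustrative; minikube runs these commands on the node over SSH via ssh_runner rather than locally):

// Illustrative sketch: register a CA certificate with the OpenSSL trust
// store by symlinking it under its subject hash, mirroring the commands
// in the log above.
package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func trustCert(certPath string) error {
	// Equivalent of: openssl x509 -hash -noout -in <cert>
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Equivalent of: test -L <link> || ln -fs <cert> <link>
	if _, err := os.Lstat(link); err == nil {
		return nil // already linked
	}
	return os.Symlink(certPath, link)
}

func main() {
	if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		log.Fatal(err)
	}
}
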
	I1003 19:36:56.729797  469677 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 19:36:56.737585  469677 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1003 19:36:56.737637  469677 kubeadm.go:400] StartCluster: {Name:no-preload-643397 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-643397 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 19:36:56.737710  469677 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1003 19:36:56.737768  469677 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1003 19:36:56.780132  469677 cri.go:89] found id: ""
	I1003 19:36:56.780210  469677 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 19:36:56.789811  469677 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1003 19:36:56.797624  469677 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1003 19:36:56.797736  469677 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 19:36:56.805674  469677 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1003 19:36:56.805698  469677 kubeadm.go:157] found existing configuration files:
	
	I1003 19:36:56.805776  469677 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1003 19:36:56.814539  469677 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1003 19:36:56.814648  469677 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1003 19:36:56.822346  469677 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1003 19:36:56.829610  469677 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1003 19:36:56.829675  469677 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1003 19:36:56.836933  469677 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1003 19:36:56.852916  469677 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1003 19:36:56.852987  469677 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1003 19:36:56.863551  469677 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1003 19:36:56.873992  469677 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1003 19:36:56.874054  469677 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
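
The grep/rm pairs above are the stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, and otherwise removed so that kubeadm init regenerates it. A condensed local sketch of the same loop (illustrative, not the kubeadm.go code; minikube performs the check on the node over SSH):

// Illustrative sketch of the stale kubeconfig cleanup shown above: drop any
// /etc/kubernetes/*.conf that does not reference the expected control-plane
// endpoint, so kubeadm init can rewrite it.
package main

import (
	"bytes"
	"fmt"
	"os"
)

const endpoint = "https://control-plane.minikube.internal:8443"

func main() {
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		data, err := os.ReadFile(f)
		if err != nil || !bytes.Contains(data, []byte(endpoint)) {
			os.Remove(f) // missing or pointing elsewhere: remove before init
			fmt.Println("removed stale config:", f)
		}
	}
}
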
	I1003 19:36:56.882629  469677 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1003 19:36:56.923304  469677 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1003 19:36:56.923637  469677 kubeadm.go:318] [preflight] Running pre-flight checks
	I1003 19:36:56.956544  469677 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1003 19:36:56.956622  469677 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1003 19:36:56.956664  469677 kubeadm.go:318] OS: Linux
	I1003 19:36:56.956718  469677 kubeadm.go:318] CGROUPS_CPU: enabled
	I1003 19:36:56.956801  469677 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1003 19:36:56.956857  469677 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1003 19:36:56.956912  469677 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1003 19:36:56.956970  469677 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1003 19:36:56.957025  469677 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1003 19:36:56.957075  469677 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1003 19:36:56.957129  469677 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1003 19:36:56.957182  469677 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1003 19:36:57.030788  469677 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1003 19:36:57.030916  469677 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1003 19:36:57.031019  469677 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1003 19:36:57.050939  469677 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1003 19:36:57.055510  469677 out.go:252]   - Generating certificates and keys ...
	I1003 19:36:57.055689  469677 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1003 19:36:57.055808  469677 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1003 19:36:57.836445  469677 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1003 19:36:57.912322  469677 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1003 19:36:58.196922  469677 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1003 19:36:58.587327  469677 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1003 19:36:58.751249  469677 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1003 19:36:58.751615  469677 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-643397] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1003 19:36:58.838899  469677 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1003 19:36:58.839218  469677 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-643397] and IPs [192.168.76.2 127.0.0.1 ::1]
	W1003 19:36:56.152283  470831 pod_ready.go:104] pod "coredns-5dd5756b68-6grkm" is not "Ready", error: <nil>
	W1003 19:36:58.650953  470831 pod_ready.go:104] pod "coredns-5dd5756b68-6grkm" is not "Ready", error: <nil>
	I1003 19:36:59.776416  469677 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1003 19:37:00.060836  469677 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1003 19:37:00.317856  469677 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1003 19:37:00.318288  469677 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1003 19:37:00.476997  469677 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1003 19:37:00.676428  469677 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1003 19:37:00.863403  469677 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1003 19:37:01.550407  469677 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1003 19:37:02.648554  469677 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1003 19:37:02.648666  469677 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1003 19:37:02.648780  469677 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1003 19:37:02.652441  469677 out.go:252]   - Booting up control plane ...
	I1003 19:37:02.652564  469677 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1003 19:37:02.652647  469677 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1003 19:37:02.652719  469677 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1003 19:37:02.670695  469677 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1003 19:37:02.670820  469677 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1003 19:37:02.682650  469677 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1003 19:37:02.682776  469677 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1003 19:37:02.682820  469677 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1003 19:37:02.856554  469677 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1003 19:37:02.856720  469677 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1003 19:37:03.858878  469677 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.002481179s
	I1003 19:37:03.862941  469677 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1003 19:37:03.863050  469677 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1003 19:37:03.863150  469677 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1003 19:37:03.863894  469677 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1003 19:37:00.658147  470831 pod_ready.go:104] pod "coredns-5dd5756b68-6grkm" is not "Ready", error: <nil>
	W1003 19:37:03.151308  470831 pod_ready.go:104] pod "coredns-5dd5756b68-6grkm" is not "Ready", error: <nil>
	I1003 19:37:08.071258  469677 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 4.207141162s
	W1003 19:37:05.651702  470831 pod_ready.go:104] pod "coredns-5dd5756b68-6grkm" is not "Ready", error: <nil>
	W1003 19:37:07.652884  470831 pod_ready.go:104] pod "coredns-5dd5756b68-6grkm" is not "Ready", error: <nil>
	W1003 19:37:09.653756  470831 pod_ready.go:104] pod "coredns-5dd5756b68-6grkm" is not "Ready", error: <nil>
	I1003 19:37:10.649991  469677 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 6.781485326s
	I1003 19:37:12.866223  469677 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 9.002847252s
	I1003 19:37:12.888325  469677 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1003 19:37:12.909020  469677 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1003 19:37:12.954407  469677 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1003 19:37:12.954615  469677 kubeadm.go:318] [mark-control-plane] Marking the node no-preload-643397 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1003 19:37:12.978776  469677 kubeadm.go:318] [bootstrap-token] Using token: dz2q20.oxlpcyn3z86knmhs
	I1003 19:37:12.981972  469677 out.go:252]   - Configuring RBAC rules ...
	I1003 19:37:12.982125  469677 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1003 19:37:13.013673  469677 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1003 19:37:13.047764  469677 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1003 19:37:13.065884  469677 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1003 19:37:13.070997  469677 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1003 19:37:13.076272  469677 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1003 19:37:13.273866  469677 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1003 19:37:13.818579  469677 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1003 19:37:14.284423  469677 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1003 19:37:14.285888  469677 kubeadm.go:318] 
	I1003 19:37:14.285967  469677 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1003 19:37:14.285973  469677 kubeadm.go:318] 
	I1003 19:37:14.286054  469677 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1003 19:37:14.286060  469677 kubeadm.go:318] 
	I1003 19:37:14.286087  469677 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1003 19:37:14.286473  469677 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1003 19:37:14.286531  469677 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1003 19:37:14.286537  469677 kubeadm.go:318] 
	I1003 19:37:14.286593  469677 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1003 19:37:14.286598  469677 kubeadm.go:318] 
	I1003 19:37:14.286651  469677 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1003 19:37:14.286656  469677 kubeadm.go:318] 
	I1003 19:37:14.286711  469677 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1003 19:37:14.286789  469677 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1003 19:37:14.286872  469677 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1003 19:37:14.286883  469677 kubeadm.go:318] 
	I1003 19:37:14.287175  469677 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1003 19:37:14.287279  469677 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1003 19:37:14.287285  469677 kubeadm.go:318] 
	I1003 19:37:14.287544  469677 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token dz2q20.oxlpcyn3z86knmhs \
	I1003 19:37:14.287665  469677 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:f66ff31263aa4cda6b17caa2076838d6a1918275f1c2773b90b119c0d4a4d71a \
	I1003 19:37:14.287847  469677 kubeadm.go:318] 	--control-plane 
	I1003 19:37:14.287875  469677 kubeadm.go:318] 
	I1003 19:37:14.288110  469677 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1003 19:37:14.288128  469677 kubeadm.go:318] 
	I1003 19:37:14.288393  469677 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token dz2q20.oxlpcyn3z86knmhs \
	I1003 19:37:14.288650  469677 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:f66ff31263aa4cda6b17caa2076838d6a1918275f1c2773b90b119c0d4a4d71a 
	I1003 19:37:14.293244  469677 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1003 19:37:14.293485  469677 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1003 19:37:14.293601  469677 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1003 19:37:14.293622  469677 cni.go:84] Creating CNI manager for ""
	I1003 19:37:14.293634  469677 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 19:37:14.299735  469677 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1003 19:37:14.303086  469677 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1003 19:37:14.309906  469677 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1003 19:37:14.309930  469677 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1003 19:37:14.336322  469677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	W1003 19:37:11.655175  470831 pod_ready.go:104] pod "coredns-5dd5756b68-6grkm" is not "Ready", error: <nil>
	W1003 19:37:13.657155  470831 pod_ready.go:104] pod "coredns-5dd5756b68-6grkm" is not "Ready", error: <nil>
	I1003 19:37:14.811333  469677 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1003 19:37:14.811471  469677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 19:37:14.811560  469677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-643397 minikube.k8s.io/updated_at=2025_10_03T19_37_14_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=a43873c79fc22f8b1ccd29d3dfa635d392b09335 minikube.k8s.io/name=no-preload-643397 minikube.k8s.io/primary=true
	I1003 19:37:15.177419  469677 ops.go:34] apiserver oom_adj: -16
	I1003 19:37:15.177535  469677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 19:37:15.678053  469677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 19:37:16.177675  469677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 19:37:16.678465  469677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 19:37:17.177605  469677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 19:37:17.678441  469677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 19:37:18.177833  469677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 19:37:18.678473  469677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 19:37:19.177998  469677 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 19:37:19.303395  469677 kubeadm.go:1113] duration metric: took 4.491974475s to wait for elevateKubeSystemPrivileges
	I1003 19:37:19.303422  469677 kubeadm.go:402] duration metric: took 22.565789399s to StartCluster
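
The repeated "kubectl get sa default" runs above are the elevateKubeSystemPrivileges wait: after creating the minikube-rbac ClusterRoleBinding, minikube polls roughly every half second until the default ServiceAccount exists in the new cluster. A simplified sketch of that poll, shelling out to the same kubectl binary and kubeconfig shown in the log (illustrative; the two-minute deadline is an assumption, and minikube runs the command over SSH with sudo):

// Illustrative sketch of the ServiceAccount poll seen above: retry
// "kubectl get sa default" until the default ServiceAccount exists.
package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.34.1/kubectl"
	kubeconfig := "--kubeconfig=/var/lib/minikube/kubeconfig"
	deadline := time.Now().Add(2 * time.Minute) // assumed timeout
	for time.Now().Before(deadline) {
		if err := exec.Command(kubectl, "get", "sa", "default", kubeconfig).Run(); err == nil {
			log.Println("default ServiceAccount is present")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("timed out waiting for default ServiceAccount")
}
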
	I1003 19:37:19.303440  469677 settings.go:142] acquiring lock: {Name:mkc95577dbc448e3409dfa2b5e53a3a1327cb451 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:37:19.303498  469677 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21625-284583/kubeconfig
	I1003 19:37:19.304437  469677 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/kubeconfig: {Name:mkc1323fd87f4a78231a26d2dab0dff7feecf1e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:37:19.304655  469677 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1003 19:37:19.304785  469677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1003 19:37:19.305028  469677 config.go:182] Loaded profile config "no-preload-643397": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 19:37:19.305059  469677 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1003 19:37:19.305117  469677 addons.go:69] Setting storage-provisioner=true in profile "no-preload-643397"
	I1003 19:37:19.305134  469677 addons.go:238] Setting addon storage-provisioner=true in "no-preload-643397"
	I1003 19:37:19.305155  469677 host.go:66] Checking if "no-preload-643397" exists ...
	I1003 19:37:19.305706  469677 addons.go:69] Setting default-storageclass=true in profile "no-preload-643397"
	I1003 19:37:19.305744  469677 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-643397"
	I1003 19:37:19.306024  469677 cli_runner.go:164] Run: docker container inspect no-preload-643397 --format={{.State.Status}}
	I1003 19:37:19.306036  469677 cli_runner.go:164] Run: docker container inspect no-preload-643397 --format={{.State.Status}}
	I1003 19:37:19.309052  469677 out.go:179] * Verifying Kubernetes components...
	I1003 19:37:19.315256  469677 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 19:37:19.344959  469677 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 19:37:19.350292  469677 addons.go:238] Setting addon default-storageclass=true in "no-preload-643397"
	I1003 19:37:19.350335  469677 host.go:66] Checking if "no-preload-643397" exists ...
	I1003 19:37:19.350745  469677 cli_runner.go:164] Run: docker container inspect no-preload-643397 --format={{.State.Status}}
	I1003 19:37:19.350945  469677 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 19:37:19.350970  469677 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1003 19:37:19.351010  469677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-643397
	I1003 19:37:19.400750  469677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/no-preload-643397/id_rsa Username:docker}
	I1003 19:37:19.407421  469677 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1003 19:37:19.407447  469677 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1003 19:37:19.407509  469677 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-643397
	I1003 19:37:19.433989  469677 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33423 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/no-preload-643397/id_rsa Username:docker}
	W1003 19:37:16.149238  470831 pod_ready.go:104] pod "coredns-5dd5756b68-6grkm" is not "Ready", error: <nil>
	W1003 19:37:18.649271  470831 pod_ready.go:104] pod "coredns-5dd5756b68-6grkm" is not "Ready", error: <nil>
	I1003 19:37:19.715486  469677 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1003 19:37:19.715593  469677 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 19:37:19.772102  469677 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1003 19:37:19.820338  469677 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 19:37:20.371803  469677 node_ready.go:35] waiting up to 6m0s for node "no-preload-643397" to be "Ready" ...
	I1003 19:37:20.371912  469677 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1003 19:37:20.880944  469677 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-643397" context rescaled to 1 replicas
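
The long sed pipeline a few lines up is what produces the "host record injected into CoreDNS's ConfigMap" message: it inserts a hosts block ahead of the existing forward directive and a log directive ahead of errors, then replaces the coredns ConfigMap in place, so that in-cluster workloads can resolve host.minikube.internal to 192.168.76.1. Reconstructed from that sed expression, the relevant Corefile fragment ends up roughly like this (surrounding plugins omitted):

        log
        errors
        ...
        hosts {
           192.168.76.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
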
	I1003 19:37:20.986839  469677 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.166463205s)
	I1003 19:37:20.990124  469677 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1003 19:37:20.993057  469677 addons.go:514] duration metric: took 1.687963193s for enable addons: enabled=[default-storageclass storage-provisioner]
	W1003 19:37:22.376123  469677 node_ready.go:57] node "no-preload-643397" has "Ready":"False" status (will retry)
	W1003 19:37:20.649460  470831 pod_ready.go:104] pod "coredns-5dd5756b68-6grkm" is not "Ready", error: <nil>
	W1003 19:37:22.650326  470831 pod_ready.go:104] pod "coredns-5dd5756b68-6grkm" is not "Ready", error: <nil>
	W1003 19:37:25.150069  470831 pod_ready.go:104] pod "coredns-5dd5756b68-6grkm" is not "Ready", error: <nil>
	W1003 19:37:24.875623  469677 node_ready.go:57] node "no-preload-643397" has "Ready":"False" status (will retry)
	W1003 19:37:26.875771  469677 node_ready.go:57] node "no-preload-643397" has "Ready":"False" status (will retry)
	W1003 19:37:29.375746  469677 node_ready.go:57] node "no-preload-643397" has "Ready":"False" status (will retry)
	W1003 19:37:27.150205  470831 pod_ready.go:104] pod "coredns-5dd5756b68-6grkm" is not "Ready", error: <nil>
	I1003 19:37:28.649438  470831 pod_ready.go:94] pod "coredns-5dd5756b68-6grkm" is "Ready"
	I1003 19:37:28.649469  470831 pod_ready.go:86] duration metric: took 36.506186575s for pod "coredns-5dd5756b68-6grkm" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:37:28.652598  470831 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-174543" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:37:28.658917  470831 pod_ready.go:94] pod "etcd-old-k8s-version-174543" is "Ready"
	I1003 19:37:28.658946  470831 pod_ready.go:86] duration metric: took 6.321554ms for pod "etcd-old-k8s-version-174543" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:37:28.662163  470831 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-174543" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:37:28.668091  470831 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-174543" is "Ready"
	I1003 19:37:28.668117  470831 pod_ready.go:86] duration metric: took 5.928958ms for pod "kube-apiserver-old-k8s-version-174543" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:37:28.671688  470831 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-174543" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:37:28.846760  470831 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-174543" is "Ready"
	I1003 19:37:28.846792  470831 pod_ready.go:86] duration metric: took 175.076433ms for pod "kube-controller-manager-old-k8s-version-174543" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:37:29.047756  470831 pod_ready.go:83] waiting for pod "kube-proxy-v4mqk" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:37:29.448122  470831 pod_ready.go:94] pod "kube-proxy-v4mqk" is "Ready"
	I1003 19:37:29.448147  470831 pod_ready.go:86] duration metric: took 400.307649ms for pod "kube-proxy-v4mqk" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:37:29.647912  470831 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-174543" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:37:30.050088  470831 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-174543" is "Ready"
	I1003 19:37:30.050180  470831 pod_ready.go:86] duration metric: took 402.239657ms for pod "kube-scheduler-old-k8s-version-174543" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:37:30.050210  470831 pod_ready.go:40] duration metric: took 37.911945126s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1003 19:37:30.129993  470831 start.go:623] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1003 19:37:30.133282  470831 out.go:203] 
	W1003 19:37:30.136402  470831 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1003 19:37:30.139579  470831 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1003 19:37:30.142604  470831 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-174543" cluster and "default" namespace by default
	W1003 19:37:31.376152  469677 node_ready.go:57] node "no-preload-643397" has "Ready":"False" status (will retry)
	I1003 19:37:33.877493  469677 node_ready.go:49] node "no-preload-643397" is "Ready"
	I1003 19:37:33.877520  469677 node_ready.go:38] duration metric: took 13.504811463s for node "no-preload-643397" to be "Ready" ...
	I1003 19:37:33.877534  469677 api_server.go:52] waiting for apiserver process to appear ...
	I1003 19:37:33.877594  469677 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 19:37:33.895506  469677 api_server.go:72] duration metric: took 14.590822912s to wait for apiserver process to appear ...
	I1003 19:37:33.895531  469677 api_server.go:88] waiting for apiserver healthz status ...
	I1003 19:37:33.895550  469677 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1003 19:37:33.909806  469677 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1003 19:37:33.910971  469677 api_server.go:141] control plane version: v1.34.1
	I1003 19:37:33.911000  469677 api_server.go:131] duration metric: took 15.46149ms to wait for apiserver health ...
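
The healthz wait above is a plain HTTPS probe: poll https://192.168.76.2:8443/healthz until it answers 200 "ok", validating the apiserver's serving certificate against the cluster CA that was copied to the node earlier. A compact sketch of such a probe (illustrative; the 4m deadline mirrors the waits quoted elsewhere in the log but is otherwise an assumption):

// Illustrative sketch of the apiserver healthz probe: poll /healthz over
// HTTPS, trusting the cluster CA, until it returns 200.
package main

import (
	"crypto/tls"
	"crypto/x509"
	"log"
	"net/http"
	"os"
	"time"
)

func main() {
	caPEM, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.76.2:8443/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				log.Println("apiserver is healthy")
				return
			}
		}
		time.Sleep(time.Second)
	}
	log.Fatal("apiserver never became healthy")
}
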
	I1003 19:37:33.911010  469677 system_pods.go:43] waiting for kube-system pods to appear ...
	I1003 19:37:33.916639  469677 system_pods.go:59] 8 kube-system pods found
	I1003 19:37:33.916673  469677 system_pods.go:61] "coredns-66bc5c9577-h8n5p" [d7f4ec9d-9c68-4332-b6c7-e52f424dcd1e] Pending
	I1003 19:37:33.916680  469677 system_pods.go:61] "etcd-no-preload-643397" [642f5548-1caf-4bb4-9780-63e00e8b0a3c] Running
	I1003 19:37:33.916685  469677 system_pods.go:61] "kindnet-7zwct" [bd0ecfeb-3764-425f-b7ae-e6f5b3e161d8] Running
	I1003 19:37:33.916689  469677 system_pods.go:61] "kube-apiserver-no-preload-643397" [6e4aa6fd-218d-45ce-a0d9-a1736936d2d3] Running
	I1003 19:37:33.916694  469677 system_pods.go:61] "kube-controller-manager-no-preload-643397" [29843b74-a1d2-46af-ac5e-06f4d53a0ac4] Running
	I1003 19:37:33.916698  469677 system_pods.go:61] "kube-proxy-lcs2q" [f25c0891-1202-477f-9cc9-5e41c3f1b9fb] Running
	I1003 19:37:33.916702  469677 system_pods.go:61] "kube-scheduler-no-preload-643397" [6865d4a0-3590-465e-81e1-927d271170c0] Running
	I1003 19:37:33.916710  469677 system_pods.go:61] "storage-provisioner" [355c16e4-3158-4ffc-9379-57747ed71cca] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1003 19:37:33.916717  469677 system_pods.go:74] duration metric: took 5.701435ms to wait for pod list to return data ...
	I1003 19:37:33.916791  469677 default_sa.go:34] waiting for default service account to be created ...
	I1003 19:37:33.929062  469677 default_sa.go:45] found service account: "default"
	I1003 19:37:33.929096  469677 default_sa.go:55] duration metric: took 12.295124ms for default service account to be created ...
	I1003 19:37:33.929107  469677 system_pods.go:116] waiting for k8s-apps to be running ...
	I1003 19:37:33.935443  469677 system_pods.go:86] 8 kube-system pods found
	I1003 19:37:33.935482  469677 system_pods.go:89] "coredns-66bc5c9577-h8n5p" [d7f4ec9d-9c68-4332-b6c7-e52f424dcd1e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1003 19:37:33.935488  469677 system_pods.go:89] "etcd-no-preload-643397" [642f5548-1caf-4bb4-9780-63e00e8b0a3c] Running
	I1003 19:37:33.935536  469677 system_pods.go:89] "kindnet-7zwct" [bd0ecfeb-3764-425f-b7ae-e6f5b3e161d8] Running
	I1003 19:37:33.935550  469677 system_pods.go:89] "kube-apiserver-no-preload-643397" [6e4aa6fd-218d-45ce-a0d9-a1736936d2d3] Running
	I1003 19:37:33.935556  469677 system_pods.go:89] "kube-controller-manager-no-preload-643397" [29843b74-a1d2-46af-ac5e-06f4d53a0ac4] Running
	I1003 19:37:33.935561  469677 system_pods.go:89] "kube-proxy-lcs2q" [f25c0891-1202-477f-9cc9-5e41c3f1b9fb] Running
	I1003 19:37:33.935566  469677 system_pods.go:89] "kube-scheduler-no-preload-643397" [6865d4a0-3590-465e-81e1-927d271170c0] Running
	I1003 19:37:33.935579  469677 system_pods.go:89] "storage-provisioner" [355c16e4-3158-4ffc-9379-57747ed71cca] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1003 19:37:33.935626  469677 retry.go:31] will retry after 295.140191ms: missing components: kube-dns
	I1003 19:37:34.235258  469677 system_pods.go:86] 8 kube-system pods found
	I1003 19:37:34.235294  469677 system_pods.go:89] "coredns-66bc5c9577-h8n5p" [d7f4ec9d-9c68-4332-b6c7-e52f424dcd1e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1003 19:37:34.235302  469677 system_pods.go:89] "etcd-no-preload-643397" [642f5548-1caf-4bb4-9780-63e00e8b0a3c] Running
	I1003 19:37:34.235309  469677 system_pods.go:89] "kindnet-7zwct" [bd0ecfeb-3764-425f-b7ae-e6f5b3e161d8] Running
	I1003 19:37:34.235339  469677 system_pods.go:89] "kube-apiserver-no-preload-643397" [6e4aa6fd-218d-45ce-a0d9-a1736936d2d3] Running
	I1003 19:37:34.235353  469677 system_pods.go:89] "kube-controller-manager-no-preload-643397" [29843b74-a1d2-46af-ac5e-06f4d53a0ac4] Running
	I1003 19:37:34.235358  469677 system_pods.go:89] "kube-proxy-lcs2q" [f25c0891-1202-477f-9cc9-5e41c3f1b9fb] Running
	I1003 19:37:34.235362  469677 system_pods.go:89] "kube-scheduler-no-preload-643397" [6865d4a0-3590-465e-81e1-927d271170c0] Running
	I1003 19:37:34.235368  469677 system_pods.go:89] "storage-provisioner" [355c16e4-3158-4ffc-9379-57747ed71cca] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1003 19:37:34.235401  469677 retry.go:31] will retry after 248.460437ms: missing components: kube-dns
	I1003 19:37:34.489309  469677 system_pods.go:86] 8 kube-system pods found
	I1003 19:37:34.489347  469677 system_pods.go:89] "coredns-66bc5c9577-h8n5p" [d7f4ec9d-9c68-4332-b6c7-e52f424dcd1e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1003 19:37:34.489354  469677 system_pods.go:89] "etcd-no-preload-643397" [642f5548-1caf-4bb4-9780-63e00e8b0a3c] Running
	I1003 19:37:34.489361  469677 system_pods.go:89] "kindnet-7zwct" [bd0ecfeb-3764-425f-b7ae-e6f5b3e161d8] Running
	I1003 19:37:34.489385  469677 system_pods.go:89] "kube-apiserver-no-preload-643397" [6e4aa6fd-218d-45ce-a0d9-a1736936d2d3] Running
	I1003 19:37:34.489390  469677 system_pods.go:89] "kube-controller-manager-no-preload-643397" [29843b74-a1d2-46af-ac5e-06f4d53a0ac4] Running
	I1003 19:37:34.489395  469677 system_pods.go:89] "kube-proxy-lcs2q" [f25c0891-1202-477f-9cc9-5e41c3f1b9fb] Running
	I1003 19:37:34.489404  469677 system_pods.go:89] "kube-scheduler-no-preload-643397" [6865d4a0-3590-465e-81e1-927d271170c0] Running
	I1003 19:37:34.489412  469677 system_pods.go:89] "storage-provisioner" [355c16e4-3158-4ffc-9379-57747ed71cca] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1003 19:37:34.489427  469677 retry.go:31] will retry after 349.773107ms: missing components: kube-dns
	I1003 19:37:34.842556  469677 system_pods.go:86] 8 kube-system pods found
	I1003 19:37:34.842590  469677 system_pods.go:89] "coredns-66bc5c9577-h8n5p" [d7f4ec9d-9c68-4332-b6c7-e52f424dcd1e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1003 19:37:34.842597  469677 system_pods.go:89] "etcd-no-preload-643397" [642f5548-1caf-4bb4-9780-63e00e8b0a3c] Running
	I1003 19:37:34.842604  469677 system_pods.go:89] "kindnet-7zwct" [bd0ecfeb-3764-425f-b7ae-e6f5b3e161d8] Running
	I1003 19:37:34.842609  469677 system_pods.go:89] "kube-apiserver-no-preload-643397" [6e4aa6fd-218d-45ce-a0d9-a1736936d2d3] Running
	I1003 19:37:34.842617  469677 system_pods.go:89] "kube-controller-manager-no-preload-643397" [29843b74-a1d2-46af-ac5e-06f4d53a0ac4] Running
	I1003 19:37:34.842621  469677 system_pods.go:89] "kube-proxy-lcs2q" [f25c0891-1202-477f-9cc9-5e41c3f1b9fb] Running
	I1003 19:37:34.842632  469677 system_pods.go:89] "kube-scheduler-no-preload-643397" [6865d4a0-3590-465e-81e1-927d271170c0] Running
	I1003 19:37:34.842638  469677 system_pods.go:89] "storage-provisioner" [355c16e4-3158-4ffc-9379-57747ed71cca] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1003 19:37:34.842653  469677 retry.go:31] will retry after 478.014809ms: missing components: kube-dns
	I1003 19:37:35.324852  469677 system_pods.go:86] 8 kube-system pods found
	I1003 19:37:35.324885  469677 system_pods.go:89] "coredns-66bc5c9577-h8n5p" [d7f4ec9d-9c68-4332-b6c7-e52f424dcd1e] Running
	I1003 19:37:35.324892  469677 system_pods.go:89] "etcd-no-preload-643397" [642f5548-1caf-4bb4-9780-63e00e8b0a3c] Running
	I1003 19:37:35.324897  469677 system_pods.go:89] "kindnet-7zwct" [bd0ecfeb-3764-425f-b7ae-e6f5b3e161d8] Running
	I1003 19:37:35.324905  469677 system_pods.go:89] "kube-apiserver-no-preload-643397" [6e4aa6fd-218d-45ce-a0d9-a1736936d2d3] Running
	I1003 19:37:35.324940  469677 system_pods.go:89] "kube-controller-manager-no-preload-643397" [29843b74-a1d2-46af-ac5e-06f4d53a0ac4] Running
	I1003 19:37:35.324953  469677 system_pods.go:89] "kube-proxy-lcs2q" [f25c0891-1202-477f-9cc9-5e41c3f1b9fb] Running
	I1003 19:37:35.324958  469677 system_pods.go:89] "kube-scheduler-no-preload-643397" [6865d4a0-3590-465e-81e1-927d271170c0] Running
	I1003 19:37:35.324962  469677 system_pods.go:89] "storage-provisioner" [355c16e4-3158-4ffc-9379-57747ed71cca] Running
	I1003 19:37:35.324969  469677 system_pods.go:126] duration metric: took 1.395856253s to wait for k8s-apps to be running ...
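
The "will retry after ...: missing components: kube-dns" lines show the k8s-apps wait pattern: list the kube-system pods, and if any required app (here kube-dns, i.e. CoreDNS) is not yet Running, back off briefly and list again until everything reports Running. A sketch of that list-and-retry loop with client-go (illustrative, not the system_pods.go implementation; the attempt cap and fixed delay are assumptions, and minikube uses a randomized backoff):

// Illustrative sketch of the kube-system retry loop above: list pods,
// require a Running pod for each expected k8s-app label, retry otherwise.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	required := []string{"kube-dns"} // k8s-app labels that must be Running
	for attempt := 0; attempt < 20; attempt++ {
		missing := missingApps(cs, required)
		if len(missing) == 0 {
			fmt.Println("all required kube-system apps are running")
			return
		}
		fmt.Printf("will retry, missing components: %v\n", missing)
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("required kube-system apps never came up")
}

func missingApps(cs *kubernetes.Clientset, required []string) []string {
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return required
	}
	running := map[string]bool{}
	for _, p := range pods.Items {
		if p.Status.Phase == corev1.PodRunning {
			running[p.Labels["k8s-app"]] = true
		}
	}
	var missing []string
	for _, app := range required {
		if !running[app] {
			missing = append(missing, app)
		}
	}
	return missing
}
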
	I1003 19:37:35.324982  469677 system_svc.go:44] waiting for kubelet service to be running ....
	I1003 19:37:35.325049  469677 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 19:37:35.338955  469677 system_svc.go:56] duration metric: took 13.963268ms WaitForService to wait for kubelet
	I1003 19:37:35.339034  469677 kubeadm.go:586] duration metric: took 16.034355182s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 19:37:35.339070  469677 node_conditions.go:102] verifying NodePressure condition ...
	I1003 19:37:35.342074  469677 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1003 19:37:35.342109  469677 node_conditions.go:123] node cpu capacity is 2
	I1003 19:37:35.342126  469677 node_conditions.go:105] duration metric: took 3.043245ms to run NodePressure ...
	I1003 19:37:35.342138  469677 start.go:241] waiting for startup goroutines ...
	I1003 19:37:35.342146  469677 start.go:246] waiting for cluster config update ...
	I1003 19:37:35.342158  469677 start.go:255] writing updated cluster config ...
	I1003 19:37:35.342457  469677 ssh_runner.go:195] Run: rm -f paused
	I1003 19:37:35.346951  469677 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1003 19:37:35.350667  469677 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-h8n5p" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:37:35.355997  469677 pod_ready.go:94] pod "coredns-66bc5c9577-h8n5p" is "Ready"
	I1003 19:37:35.356030  469677 pod_ready.go:86] duration metric: took 5.334275ms for pod "coredns-66bc5c9577-h8n5p" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:37:35.358383  469677 pod_ready.go:83] waiting for pod "etcd-no-preload-643397" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:37:35.363206  469677 pod_ready.go:94] pod "etcd-no-preload-643397" is "Ready"
	I1003 19:37:35.363231  469677 pod_ready.go:86] duration metric: took 4.821224ms for pod "etcd-no-preload-643397" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:37:35.366173  469677 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-643397" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:37:35.370975  469677 pod_ready.go:94] pod "kube-apiserver-no-preload-643397" is "Ready"
	I1003 19:37:35.371012  469677 pod_ready.go:86] duration metric: took 4.811206ms for pod "kube-apiserver-no-preload-643397" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:37:35.375547  469677 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-643397" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:37:35.751762  469677 pod_ready.go:94] pod "kube-controller-manager-no-preload-643397" is "Ready"
	I1003 19:37:35.751787  469677 pod_ready.go:86] duration metric: took 376.212677ms for pod "kube-controller-manager-no-preload-643397" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:37:35.951184  469677 pod_ready.go:83] waiting for pod "kube-proxy-lcs2q" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:37:36.350602  469677 pod_ready.go:94] pod "kube-proxy-lcs2q" is "Ready"
	I1003 19:37:36.350635  469677 pod_ready.go:86] duration metric: took 399.421484ms for pod "kube-proxy-lcs2q" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:37:36.550913  469677 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-643397" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:37:36.951534  469677 pod_ready.go:94] pod "kube-scheduler-no-preload-643397" is "Ready"
	I1003 19:37:36.951574  469677 pod_ready.go:86] duration metric: took 400.633013ms for pod "kube-scheduler-no-preload-643397" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:37:36.951587  469677 pod_ready.go:40] duration metric: took 1.604603534s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1003 19:37:37.024926  469677 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1003 19:37:37.028838  469677 out.go:179] * Done! kubectl is now configured to use "no-preload-643397" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 03 19:37:34 no-preload-643397 crio[837]: time="2025-10-03T19:37:34.295466216Z" level=info msg="Created container 38792b09c36c6f720dcb4a60b61b1fc69f203ccd6c4400eadc781cf5e9096ed2: kube-system/coredns-66bc5c9577-h8n5p/coredns" id=1750e59d-ba0f-474c-b1ab-72fb0b8b8f96 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:37:34 no-preload-643397 crio[837]: time="2025-10-03T19:37:34.296910274Z" level=info msg="Starting container: 38792b09c36c6f720dcb4a60b61b1fc69f203ccd6c4400eadc781cf5e9096ed2" id=c1069b9d-4a56-4b6f-bbed-e87a05f6f0b3 name=/runtime.v1.RuntimeService/StartContainer
	Oct 03 19:37:34 no-preload-643397 crio[837]: time="2025-10-03T19:37:34.301566819Z" level=info msg="Started container" PID=2478 containerID=38792b09c36c6f720dcb4a60b61b1fc69f203ccd6c4400eadc781cf5e9096ed2 description=kube-system/coredns-66bc5c9577-h8n5p/coredns id=c1069b9d-4a56-4b6f-bbed-e87a05f6f0b3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=28d4703b83d2106fb23318222d8a5db1c5e1f2006edaf45e9ef1aef5abea39f3
	Oct 03 19:37:37 no-preload-643397 crio[837]: time="2025-10-03T19:37:37.547557015Z" level=info msg="Running pod sandbox: default/busybox/POD" id=4c6b3bb5-8bef-4f67-99ab-e2d3d27d32ed name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 03 19:37:37 no-preload-643397 crio[837]: time="2025-10-03T19:37:37.547632446Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 19:37:37 no-preload-643397 crio[837]: time="2025-10-03T19:37:37.553073052Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:e622cf2d5fa70bfdc9a7caa8796fbecfe482725e80adb9d891d6e619f72ca74b UID:94854f52-744f-499a-b87d-fc57eb32aae8 NetNS:/var/run/netns/261df386-744d-44c9-b751-f6585065aab5 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001512098}] Aliases:map[]}"
	Oct 03 19:37:37 no-preload-643397 crio[837]: time="2025-10-03T19:37:37.553109188Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 03 19:37:37 no-preload-643397 crio[837]: time="2025-10-03T19:37:37.573495009Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:e622cf2d5fa70bfdc9a7caa8796fbecfe482725e80adb9d891d6e619f72ca74b UID:94854f52-744f-499a-b87d-fc57eb32aae8 NetNS:/var/run/netns/261df386-744d-44c9-b751-f6585065aab5 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001512098}] Aliases:map[]}"
	Oct 03 19:37:37 no-preload-643397 crio[837]: time="2025-10-03T19:37:37.573825837Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 03 19:37:37 no-preload-643397 crio[837]: time="2025-10-03T19:37:37.59044951Z" level=info msg="Ran pod sandbox e622cf2d5fa70bfdc9a7caa8796fbecfe482725e80adb9d891d6e619f72ca74b with infra container: default/busybox/POD" id=4c6b3bb5-8bef-4f67-99ab-e2d3d27d32ed name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 03 19:37:37 no-preload-643397 crio[837]: time="2025-10-03T19:37:37.591689791Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=65a0f4c1-0330-43de-b22f-2573bc74df4a name=/runtime.v1.ImageService/ImageStatus
	Oct 03 19:37:37 no-preload-643397 crio[837]: time="2025-10-03T19:37:37.591830331Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=65a0f4c1-0330-43de-b22f-2573bc74df4a name=/runtime.v1.ImageService/ImageStatus
	Oct 03 19:37:37 no-preload-643397 crio[837]: time="2025-10-03T19:37:37.591872572Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=65a0f4c1-0330-43de-b22f-2573bc74df4a name=/runtime.v1.ImageService/ImageStatus
	Oct 03 19:37:37 no-preload-643397 crio[837]: time="2025-10-03T19:37:37.592649264Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0e8d31e4-0a09-4ec4-b2ee-a67dc4860358 name=/runtime.v1.ImageService/PullImage
	Oct 03 19:37:37 no-preload-643397 crio[837]: time="2025-10-03T19:37:37.594876547Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 03 19:37:39 no-preload-643397 crio[837]: time="2025-10-03T19:37:39.617809762Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=0e8d31e4-0a09-4ec4-b2ee-a67dc4860358 name=/runtime.v1.ImageService/PullImage
	Oct 03 19:37:39 no-preload-643397 crio[837]: time="2025-10-03T19:37:39.618483675Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=17147d36-9140-426e-abb6-7e54b64dfd56 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 19:37:39 no-preload-643397 crio[837]: time="2025-10-03T19:37:39.62027896Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=227090b0-108e-4e3c-abe7-d5e5c425f82f name=/runtime.v1.ImageService/ImageStatus
	Oct 03 19:37:39 no-preload-643397 crio[837]: time="2025-10-03T19:37:39.628274804Z" level=info msg="Creating container: default/busybox/busybox" id=26424023-3279-4675-b324-54c034e32b40 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:37:39 no-preload-643397 crio[837]: time="2025-10-03T19:37:39.629156064Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 19:37:39 no-preload-643397 crio[837]: time="2025-10-03T19:37:39.634059522Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 19:37:39 no-preload-643397 crio[837]: time="2025-10-03T19:37:39.634641242Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 19:37:39 no-preload-643397 crio[837]: time="2025-10-03T19:37:39.649861604Z" level=info msg="Created container c9e1a9809ee4990ac0998b0a5617acdcd10a81d02b2875e442049fde7782f0b1: default/busybox/busybox" id=26424023-3279-4675-b324-54c034e32b40 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:37:39 no-preload-643397 crio[837]: time="2025-10-03T19:37:39.653713042Z" level=info msg="Starting container: c9e1a9809ee4990ac0998b0a5617acdcd10a81d02b2875e442049fde7782f0b1" id=60963ef6-da88-46a1-9b6f-9f61a10229b8 name=/runtime.v1.RuntimeService/StartContainer
	Oct 03 19:37:39 no-preload-643397 crio[837]: time="2025-10-03T19:37:39.657614401Z" level=info msg="Started container" PID=2530 containerID=c9e1a9809ee4990ac0998b0a5617acdcd10a81d02b2875e442049fde7782f0b1 description=default/busybox/busybox id=60963ef6-da88-46a1-9b6f-9f61a10229b8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e622cf2d5fa70bfdc9a7caa8796fbecfe482725e80adb9d891d6e619f72ca74b
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	c9e1a9809ee49       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago       Running             busybox                   0                   e622cf2d5fa70       busybox                                     default
	38792b09c36c6       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      12 seconds ago      Running             coredns                   0                   28d4703b83d21       coredns-66bc5c9577-h8n5p                    kube-system
	29205abdea006       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                      12 seconds ago      Running             storage-provisioner       0                   2eb53e658829f       storage-provisioner                         kube-system
	dcc2ecc1dc1d3       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    24 seconds ago      Running             kindnet-cni               0                   b8489440db373       kindnet-7zwct                               kube-system
	82f23cf0f997c       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      27 seconds ago      Running             kube-proxy                0                   4aa41419ddcb1       kube-proxy-lcs2q                            kube-system
	34193cc62f161       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      42 seconds ago      Running             kube-controller-manager   0                   4bc60b12d4a45       kube-controller-manager-no-preload-643397   kube-system
	319886bb6d15c       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      42 seconds ago      Running             kube-scheduler            0                   4c6f1983e07f7       kube-scheduler-no-preload-643397            kube-system
	8fdd9b5dc923a       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      42 seconds ago      Running             kube-apiserver            0                   09f984c20f925       kube-apiserver-no-preload-643397            kube-system
	37bf662334e09       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      42 seconds ago      Running             etcd                      0                   564aaf201f3ea       etcd-no-preload-643397                      kube-system
	
	
	==> coredns [38792b09c36c6f720dcb4a60b61b1fc69f203ccd6c4400eadc781cf5e9096ed2] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38165 - 41754 "HINFO IN 6784707466068289290.5231679761351248507. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.025444695s
	
	
	==> describe nodes <==
	Name:               no-preload-643397
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-643397
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a43873c79fc22f8b1ccd29d3dfa635d392b09335
	                    minikube.k8s.io/name=no-preload-643397
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_03T19_37_14_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 03 Oct 2025 19:37:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-643397
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 03 Oct 2025 19:37:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 03 Oct 2025 19:37:45 +0000   Fri, 03 Oct 2025 19:37:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 03 Oct 2025 19:37:45 +0000   Fri, 03 Oct 2025 19:37:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 03 Oct 2025 19:37:45 +0000   Fri, 03 Oct 2025 19:37:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 03 Oct 2025 19:37:45 +0000   Fri, 03 Oct 2025 19:37:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-643397
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 47b8372bf8d449ebac4d129e0fd4e213
	  System UUID:                acffaaf4-a938-4dce-9b53-3c0346f455b4
	  Boot ID:                    3762136e-8bec-4104-a5cb-0b1976f6048e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-h8n5p                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     28s
	  kube-system                 etcd-no-preload-643397                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         33s
	  kube-system                 kindnet-7zwct                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-no-preload-643397             250m (12%)    0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-no-preload-643397    200m (10%)    0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-proxy-lcs2q                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-no-preload-643397             100m (5%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 26s   kube-proxy       
	  Normal   Starting                 34s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 34s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  33s   kubelet          Node no-preload-643397 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    33s   kubelet          Node no-preload-643397 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     33s   kubelet          Node no-preload-643397 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           29s   node-controller  Node no-preload-643397 event: Registered Node no-preload-643397 in Controller
	  Normal   NodeReady                14s   kubelet          Node no-preload-643397 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct 3 19:07] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:08] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:09] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:10] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:11] overlayfs: idmapped layers are currently not supported
	[  +4.287643] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:12] overlayfs: idmapped layers are currently not supported
	[ +24.839009] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:13] overlayfs: idmapped layers are currently not supported
	[ +26.493253] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:15] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:16] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:17] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000010] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Oct 3 19:18] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:20] overlayfs: idmapped layers are currently not supported
	[ +32.018892] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:22] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:24] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:26] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:32] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:34] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:35] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:36] overlayfs: idmapped layers are currently not supported
	[  +4.740983] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [37bf662334e097a20652b0fef3b6f8c92bc0cd37e41f822aa9fd08fbadce2974] <==
	{"level":"warn","ts":"2025-10-03T19:37:07.937418Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:37:08.031447Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:37:08.074373Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:37:08.094310Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:37:08.121930Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:37:08.139471Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:37:08.171505Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:37:08.187674Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:37:08.211298Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:37:08.245149Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:37:08.284498Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:37:08.330696Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:37:08.366796Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:37:08.396787Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:37:08.428232Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:37:08.448426Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:37:08.500996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:37:08.547771Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:37:08.580712Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:37:08.607036Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:37:08.651083Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:37:08.685169Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:37:08.720692Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:37:08.741539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:37:08.884891Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60464","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 19:37:47 up  2:20,  0 user,  load average: 5.21, 2.57, 2.05
	Linux no-preload-643397 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [dcc2ecc1dc1d3404f878e7537bb11758b1ebd0a42557975918f11da0e3e3547a] <==
	I1003 19:37:23.102814       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1003 19:37:23.189314       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1003 19:37:23.189548       1 main.go:148] setting mtu 1500 for CNI 
	I1003 19:37:23.189570       1 main.go:178] kindnetd IP family: "ipv4"
	I1003 19:37:23.189776       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-03T19:37:23Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1003 19:37:23.391609       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1003 19:37:23.391675       1 controller.go:381] "Waiting for informer caches to sync"
	I1003 19:37:23.391713       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1003 19:37:23.392783       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1003 19:37:23.691939       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1003 19:37:23.691970       1 metrics.go:72] Registering metrics
	I1003 19:37:23.692053       1 controller.go:711] "Syncing nftables rules"
	I1003 19:37:33.392419       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1003 19:37:33.392472       1 main.go:301] handling current node
	I1003 19:37:43.392814       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1003 19:37:43.392848       1 main.go:301] handling current node
	
	
	==> kube-apiserver [8fdd9b5dc923a58634190968c1565e2cc23352044188eb70c7dff0684685c6c5] <==
	I1003 19:37:10.558052       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1003 19:37:10.558724       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1003 19:37:10.558805       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1003 19:37:10.585917       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1003 19:37:10.595843       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1003 19:37:10.650214       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1003 19:37:10.666470       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1003 19:37:11.208531       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1003 19:37:11.221992       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1003 19:37:11.222237       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1003 19:37:12.237898       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1003 19:37:12.312226       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1003 19:37:12.504029       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1003 19:37:12.517017       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1003 19:37:12.518747       1 controller.go:667] quota admission added evaluator for: endpoints
	I1003 19:37:12.525669       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1003 19:37:13.350475       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1003 19:37:13.740204       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1003 19:37:13.814833       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1003 19:37:13.896459       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1003 19:37:19.037706       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1003 19:37:19.463405       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1003 19:37:19.506301       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1003 19:37:19.519207       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1003 19:37:45.455751       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:50024: use of closed network connection
	
	
	==> kube-controller-manager [34193cc62f161a3ea357d8b8ac650a36114ee3dccbbd0c820e7db41086c2daff] <==
	I1003 19:37:18.384279       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1003 19:37:18.384535       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1003 19:37:18.386415       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1003 19:37:18.387737       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1003 19:37:18.388984       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1003 19:37:18.389204       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1003 19:37:18.389278       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1003 19:37:18.389309       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1003 19:37:18.389357       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1003 19:37:18.389393       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1003 19:37:18.389402       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1003 19:37:18.389408       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1003 19:37:18.390002       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1003 19:37:18.390047       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1003 19:37:18.393284       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1003 19:37:18.397041       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1003 19:37:18.397337       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1003 19:37:18.398979       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-643397" podCIDRs=["10.244.0.0/24"]
	I1003 19:37:18.401438       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1003 19:37:18.410554       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1003 19:37:18.421101       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1003 19:37:18.421203       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1003 19:37:18.421310       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-643397"
	I1003 19:37:18.421358       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1003 19:37:38.424667       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [82f23cf0f997ca5473f8e60ea25081ffc77892a9c40d1fcd5d30829a40332b2d] <==
	I1003 19:37:20.288663       1 server_linux.go:53] "Using iptables proxy"
	I1003 19:37:20.580405       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1003 19:37:20.682684       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1003 19:37:20.682717       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1003 19:37:20.682795       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1003 19:37:20.813467       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1003 19:37:20.813523       1 server_linux.go:132] "Using iptables Proxier"
	I1003 19:37:20.831151       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1003 19:37:20.841202       1 server.go:527] "Version info" version="v1.34.1"
	I1003 19:37:20.841236       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1003 19:37:20.842912       1 config.go:200] "Starting service config controller"
	I1003 19:37:20.842924       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1003 19:37:20.842953       1 config.go:106] "Starting endpoint slice config controller"
	I1003 19:37:20.842958       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1003 19:37:20.843050       1 config.go:403] "Starting serviceCIDR config controller"
	I1003 19:37:20.843056       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1003 19:37:20.875378       1 config.go:309] "Starting node config controller"
	I1003 19:37:20.875670       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1003 19:37:20.875706       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1003 19:37:20.966088       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1003 19:37:21.046278       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1003 19:37:21.046322       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [319886bb6d15c6c0847d5a175267cd0a70a6a1fc7838cb7bb92d6bcb43485803] <==
	E1003 19:37:10.669228       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1003 19:37:10.669286       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1003 19:37:10.669368       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1003 19:37:10.669415       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1003 19:37:10.669480       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1003 19:37:10.669530       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1003 19:37:10.669576       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1003 19:37:10.669627       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1003 19:37:10.669679       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1003 19:37:10.669780       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1003 19:37:10.669841       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1003 19:37:10.669858       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1003 19:37:10.669914       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1003 19:37:11.598994       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1003 19:37:11.603014       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1003 19:37:11.648062       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1003 19:37:11.677900       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1003 19:37:11.677982       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1003 19:37:11.681695       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1003 19:37:11.697833       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1003 19:37:11.747477       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1003 19:37:11.769714       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1003 19:37:11.794624       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1003 19:37:11.975384       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1003 19:37:15.030066       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 03 19:37:18 no-preload-643397 kubelet[1986]: I1003 19:37:18.420481    1986 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 03 19:37:18 no-preload-643397 kubelet[1986]: I1003 19:37:18.421959    1986 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 03 19:37:19 no-preload-643397 kubelet[1986]: E1003 19:37:19.624272    1986 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-7zwct\" is forbidden: User \"system:node:no-preload-643397\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'no-preload-643397' and this object" podUID="bd0ecfeb-3764-425f-b7ae-e6f5b3e161d8" pod="kube-system/kindnet-7zwct"
	Oct 03 19:37:19 no-preload-643397 kubelet[1986]: I1003 19:37:19.655260    1986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/bd0ecfeb-3764-425f-b7ae-e6f5b3e161d8-cni-cfg\") pod \"kindnet-7zwct\" (UID: \"bd0ecfeb-3764-425f-b7ae-e6f5b3e161d8\") " pod="kube-system/kindnet-7zwct"
	Oct 03 19:37:19 no-preload-643397 kubelet[1986]: I1003 19:37:19.655304    1986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bd0ecfeb-3764-425f-b7ae-e6f5b3e161d8-lib-modules\") pod \"kindnet-7zwct\" (UID: \"bd0ecfeb-3764-425f-b7ae-e6f5b3e161d8\") " pod="kube-system/kindnet-7zwct"
	Oct 03 19:37:19 no-preload-643397 kubelet[1986]: I1003 19:37:19.655392    1986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gncwf\" (UniqueName: \"kubernetes.io/projected/bd0ecfeb-3764-425f-b7ae-e6f5b3e161d8-kube-api-access-gncwf\") pod \"kindnet-7zwct\" (UID: \"bd0ecfeb-3764-425f-b7ae-e6f5b3e161d8\") " pod="kube-system/kindnet-7zwct"
	Oct 03 19:37:19 no-preload-643397 kubelet[1986]: I1003 19:37:19.655506    1986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bd0ecfeb-3764-425f-b7ae-e6f5b3e161d8-xtables-lock\") pod \"kindnet-7zwct\" (UID: \"bd0ecfeb-3764-425f-b7ae-e6f5b3e161d8\") " pod="kube-system/kindnet-7zwct"
	Oct 03 19:37:19 no-preload-643397 kubelet[1986]: I1003 19:37:19.755718    1986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f25c0891-1202-477f-9cc9-5e41c3f1b9fb-lib-modules\") pod \"kube-proxy-lcs2q\" (UID: \"f25c0891-1202-477f-9cc9-5e41c3f1b9fb\") " pod="kube-system/kube-proxy-lcs2q"
	Oct 03 19:37:19 no-preload-643397 kubelet[1986]: I1003 19:37:19.755788    1986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f25c0891-1202-477f-9cc9-5e41c3f1b9fb-kube-proxy\") pod \"kube-proxy-lcs2q\" (UID: \"f25c0891-1202-477f-9cc9-5e41c3f1b9fb\") " pod="kube-system/kube-proxy-lcs2q"
	Oct 03 19:37:19 no-preload-643397 kubelet[1986]: I1003 19:37:19.755805    1986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f25c0891-1202-477f-9cc9-5e41c3f1b9fb-xtables-lock\") pod \"kube-proxy-lcs2q\" (UID: \"f25c0891-1202-477f-9cc9-5e41c3f1b9fb\") " pod="kube-system/kube-proxy-lcs2q"
	Oct 03 19:37:19 no-preload-643397 kubelet[1986]: I1003 19:37:19.755838    1986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fgbt\" (UniqueName: \"kubernetes.io/projected/f25c0891-1202-477f-9cc9-5e41c3f1b9fb-kube-api-access-7fgbt\") pod \"kube-proxy-lcs2q\" (UID: \"f25c0891-1202-477f-9cc9-5e41c3f1b9fb\") " pod="kube-system/kube-proxy-lcs2q"
	Oct 03 19:37:19 no-preload-643397 kubelet[1986]: I1003 19:37:19.790620    1986 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 03 19:37:19 no-preload-643397 kubelet[1986]: W1003 19:37:19.916432    1986 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/2ff626657df750cf9a1329bdf9d0fad13d27c9b5d259ea3feeee2866dd91e501/crio-b8489440db37311d736f3a78cef37542e40498d03586523a4411d088fbdd56bc WatchSource:0}: Error finding container b8489440db37311d736f3a78cef37542e40498d03586523a4411d088fbdd56bc: Status 404 returned error can't find the container with id b8489440db37311d736f3a78cef37542e40498d03586523a4411d088fbdd56bc
	Oct 03 19:37:19 no-preload-643397 kubelet[1986]: W1003 19:37:19.948434    1986 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/2ff626657df750cf9a1329bdf9d0fad13d27c9b5d259ea3feeee2866dd91e501/crio-4aa41419ddcb12f3cfdbecb8659efff6b16eecd194d5990fc4280bcc086acf6d WatchSource:0}: Error finding container 4aa41419ddcb12f3cfdbecb8659efff6b16eecd194d5990fc4280bcc086acf6d: Status 404 returned error can't find the container with id 4aa41419ddcb12f3cfdbecb8659efff6b16eecd194d5990fc4280bcc086acf6d
	Oct 03 19:37:20 no-preload-643397 kubelet[1986]: I1003 19:37:20.164604    1986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lcs2q" podStartSLOduration=1.164582161 podStartE2EDuration="1.164582161s" podCreationTimestamp="2025-10-03 19:37:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-03 19:37:20.099516186 +0000 UTC m=+6.460550004" watchObservedRunningTime="2025-10-03 19:37:20.164582161 +0000 UTC m=+6.525615971"
	Oct 03 19:37:23 no-preload-643397 kubelet[1986]: I1003 19:37:23.132355    1986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-7zwct" podStartSLOduration=1.041677135 podStartE2EDuration="4.132334721s" podCreationTimestamp="2025-10-03 19:37:19 +0000 UTC" firstStartedPulling="2025-10-03 19:37:19.921169761 +0000 UTC m=+6.282203571" lastFinishedPulling="2025-10-03 19:37:23.011827339 +0000 UTC m=+9.372861157" observedRunningTime="2025-10-03 19:37:23.113541039 +0000 UTC m=+9.474574857" watchObservedRunningTime="2025-10-03 19:37:23.132334721 +0000 UTC m=+9.493368539"
	Oct 03 19:37:33 no-preload-643397 kubelet[1986]: I1003 19:37:33.848677    1986 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 03 19:37:33 no-preload-643397 kubelet[1986]: I1003 19:37:33.971438    1986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/355c16e4-3158-4ffc-9379-57747ed71cca-tmp\") pod \"storage-provisioner\" (UID: \"355c16e4-3158-4ffc-9379-57747ed71cca\") " pod="kube-system/storage-provisioner"
	Oct 03 19:37:33 no-preload-643397 kubelet[1986]: I1003 19:37:33.971491    1986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcrz9\" (UniqueName: \"kubernetes.io/projected/355c16e4-3158-4ffc-9379-57747ed71cca-kube-api-access-wcrz9\") pod \"storage-provisioner\" (UID: \"355c16e4-3158-4ffc-9379-57747ed71cca\") " pod="kube-system/storage-provisioner"
	Oct 03 19:37:33 no-preload-643397 kubelet[1986]: I1003 19:37:33.971524    1986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7f4ec9d-9c68-4332-b6c7-e52f424dcd1e-config-volume\") pod \"coredns-66bc5c9577-h8n5p\" (UID: \"d7f4ec9d-9c68-4332-b6c7-e52f424dcd1e\") " pod="kube-system/coredns-66bc5c9577-h8n5p"
	Oct 03 19:37:33 no-preload-643397 kubelet[1986]: I1003 19:37:33.971545    1986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6cwc\" (UniqueName: \"kubernetes.io/projected/d7f4ec9d-9c68-4332-b6c7-e52f424dcd1e-kube-api-access-q6cwc\") pod \"coredns-66bc5c9577-h8n5p\" (UID: \"d7f4ec9d-9c68-4332-b6c7-e52f424dcd1e\") " pod="kube-system/coredns-66bc5c9577-h8n5p"
	Oct 03 19:37:35 no-preload-643397 kubelet[1986]: I1003 19:37:35.168316    1986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-h8n5p" podStartSLOduration=16.168286559 podStartE2EDuration="16.168286559s" podCreationTimestamp="2025-10-03 19:37:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-03 19:37:35.145243976 +0000 UTC m=+21.506277786" watchObservedRunningTime="2025-10-03 19:37:35.168286559 +0000 UTC m=+21.529320377"
	Oct 03 19:37:37 no-preload-643397 kubelet[1986]: I1003 19:37:37.236919    1986 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=17.236897878 podStartE2EDuration="17.236897878s" podCreationTimestamp="2025-10-03 19:37:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-03 19:37:35.197089811 +0000 UTC m=+21.558123629" watchObservedRunningTime="2025-10-03 19:37:37.236897878 +0000 UTC m=+23.597931688"
	Oct 03 19:37:37 no-preload-643397 kubelet[1986]: I1003 19:37:37.292470    1986 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84t2g\" (UniqueName: \"kubernetes.io/projected/94854f52-744f-499a-b87d-fc57eb32aae8-kube-api-access-84t2g\") pod \"busybox\" (UID: \"94854f52-744f-499a-b87d-fc57eb32aae8\") " pod="default/busybox"
	Oct 03 19:37:37 no-preload-643397 kubelet[1986]: W1003 19:37:37.588225    1986 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/2ff626657df750cf9a1329bdf9d0fad13d27c9b5d259ea3feeee2866dd91e501/crio-e622cf2d5fa70bfdc9a7caa8796fbecfe482725e80adb9d891d6e619f72ca74b WatchSource:0}: Error finding container e622cf2d5fa70bfdc9a7caa8796fbecfe482725e80adb9d891d6e619f72ca74b: Status 404 returned error can't find the container with id e622cf2d5fa70bfdc9a7caa8796fbecfe482725e80adb9d891d6e619f72ca74b
	
	
	==> storage-provisioner [29205abdea0065513a417d013101ce0e63a0d429f5a9bad90085d549769b0724] <==
	I1003 19:37:34.276882       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1003 19:37:34.295516       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1003 19:37:34.295566       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1003 19:37:34.298782       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:37:34.311136       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1003 19:37:34.311348       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1003 19:37:34.311597       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-643397_965e0330-7c96-46d3-88b3-986f9f475b99!
	I1003 19:37:34.312927       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0d558076-5928-4d46-b528-95f96636eae1", APIVersion:"v1", ResourceVersion:"414", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-643397_965e0330-7c96-46d3-88b3-986f9f475b99 became leader
	W1003 19:37:34.318805       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:37:34.322209       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1003 19:37:34.412320       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-643397_965e0330-7c96-46d3-88b3-986f9f475b99!
	W1003 19:37:36.325979       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:37:36.330758       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:37:38.337709       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:37:38.342500       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:37:40.345828       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:37:40.351017       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:37:42.354145       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:37:42.362504       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:37:44.365833       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:37:44.370822       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:37:46.374103       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:37:46.380478       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
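The kubelet latency-tracker entries in the log above are internally consistent: podStartE2EDuration runs from podCreationTimestamp to the observed running time, and podStartSLOduration is that figure minus the image-pull window. A worked check against the kindnet-7zwct line, using the timestamps exactly as logged:

	podStartE2EDuration = 19:37:23.132334721 − 19:37:19 = 4.132334721s
	pull window         = lastFinishedPulling − firstStartedPulling = (m=+9.372861157) − (m=+6.282203571) = 3.090657586s
	podStartSLOduration = 4.132334721s − 3.090657586s = 1.041677135s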
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-643397 -n no-preload-643397
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-643397 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (3.17s)
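The repeated warnings in the storage-provisioner log above come from its leader election, which still takes its lock on a v1 Endpoints object (kube-system/k8s.io-minikube-hostpath); Kubernetes v1.33+ flags that API as deprecated in favor of discovery.k8s.io/v1 EndpointSlice. A minimal sketch for inspecting that lock, assuming the no-preload-643397 kubectl context used in the post-mortem step above is still available (the holder identity is normally recorded in the control-plane.alpha.kubernetes.io/leader annotation):

	# Leader-election lock named in the provisioner log:
	kubectl --context no-preload-643397 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml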

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (6.55s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-643397 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p no-preload-643397 --alsologtostderr -v=1: exit status 80 (1.791126701s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-643397 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 19:39:16.199482  481965 out.go:360] Setting OutFile to fd 1 ...
	I1003 19:39:16.199655  481965 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 19:39:16.199667  481965 out.go:374] Setting ErrFile to fd 2...
	I1003 19:39:16.199673  481965 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 19:39:16.200009  481965 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-284583/.minikube/bin
	I1003 19:39:16.200326  481965 out.go:368] Setting JSON to false
	I1003 19:39:16.200374  481965 mustload.go:65] Loading cluster: no-preload-643397
	I1003 19:39:16.200862  481965 config.go:182] Loaded profile config "no-preload-643397": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 19:39:16.201414  481965 cli_runner.go:164] Run: docker container inspect no-preload-643397 --format={{.State.Status}}
	I1003 19:39:16.221004  481965 host.go:66] Checking if "no-preload-643397" exists ...
	I1003 19:39:16.221384  481965 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 19:39:16.281411  481965 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-03 19:39:16.271526971 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1003 19:39:16.282030  481965 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-643397 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1003 19:39:16.285492  481965 out.go:179] * Pausing node no-preload-643397 ... 
	I1003 19:39:16.288520  481965 host.go:66] Checking if "no-preload-643397" exists ...
	I1003 19:39:16.288974  481965 ssh_runner.go:195] Run: systemctl --version
	I1003 19:39:16.289031  481965 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-643397
	I1003 19:39:16.306456  481965 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/no-preload-643397/id_rsa Username:docker}
	I1003 19:39:16.407755  481965 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 19:39:16.421572  481965 pause.go:51] kubelet running: true
	I1003 19:39:16.421636  481965 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1003 19:39:16.672516  481965 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1003 19:39:16.672620  481965 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1003 19:39:16.741865  481965 cri.go:89] found id: "aa091721e2bf929a06f8f2a0382b1ac27830c5ef2bedaeb775f4567f2a80447c"
	I1003 19:39:16.741927  481965 cri.go:89] found id: "08858262c415390ebd844284cd70070377a032c8c9eb33572a8ede338609d2c5"
	I1003 19:39:16.741937  481965 cri.go:89] found id: "9a21627a747b30eb7424912a81297de7e4b519fb2f1252d457725408bd116383"
	I1003 19:39:16.741943  481965 cri.go:89] found id: "536d418166ee54c56a8550cc5c3e8e5c8328113ba2d06a9231fa1c71db5c6035"
	I1003 19:39:16.741956  481965 cri.go:89] found id: "3758592f491ab78c49e621316a06fabe1198eeb6f1be7d8ed8d05bc65d190237"
	I1003 19:39:16.741961  481965 cri.go:89] found id: "b652fe32e2a41b7f6685f05ea15d89051280d1a714c5ade044ee7267681f63c0"
	I1003 19:39:16.741964  481965 cri.go:89] found id: "812c215ff131175f339b6cce18e2749be199f4a5f61868272c2e91503fb4ccb8"
	I1003 19:39:16.741967  481965 cri.go:89] found id: "50b207c92dde75b009a0a2439f4af8008c52855e0ddbc54dcf57ab3bd1972302"
	I1003 19:39:16.741970  481965 cri.go:89] found id: "c2a31dbd1b598431e3e46d051690749feb66f319d34b0915aae14a51b8c1b0e2"
	I1003 19:39:16.741980  481965 cri.go:89] found id: "9e1e9b4fe19a20d0e1d02f1ab66d7f7479fb8f666b2994af5f888db15ff382d4"
	I1003 19:39:16.741986  481965 cri.go:89] found id: "8ed7a25aeb889c9f8a8428310aeb66737ce47377bcda2f1f2e1c8885151af962"
	I1003 19:39:16.741989  481965 cri.go:89] found id: ""
	I1003 19:39:16.742036  481965 ssh_runner.go:195] Run: sudo runc list -f json
	I1003 19:39:16.761338  481965 retry.go:31] will retry after 163.627859ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-03T19:39:16Z" level=error msg="open /run/runc: no such file or directory"
	I1003 19:39:16.925756  481965 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 19:39:16.938699  481965 pause.go:51] kubelet running: false
	I1003 19:39:16.938811  481965 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1003 19:39:17.130538  481965 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1003 19:39:17.130629  481965 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1003 19:39:17.206143  481965 cri.go:89] found id: "aa091721e2bf929a06f8f2a0382b1ac27830c5ef2bedaeb775f4567f2a80447c"
	I1003 19:39:17.206168  481965 cri.go:89] found id: "08858262c415390ebd844284cd70070377a032c8c9eb33572a8ede338609d2c5"
	I1003 19:39:17.206174  481965 cri.go:89] found id: "9a21627a747b30eb7424912a81297de7e4b519fb2f1252d457725408bd116383"
	I1003 19:39:17.206178  481965 cri.go:89] found id: "536d418166ee54c56a8550cc5c3e8e5c8328113ba2d06a9231fa1c71db5c6035"
	I1003 19:39:17.206181  481965 cri.go:89] found id: "3758592f491ab78c49e621316a06fabe1198eeb6f1be7d8ed8d05bc65d190237"
	I1003 19:39:17.206185  481965 cri.go:89] found id: "b652fe32e2a41b7f6685f05ea15d89051280d1a714c5ade044ee7267681f63c0"
	I1003 19:39:17.206188  481965 cri.go:89] found id: "812c215ff131175f339b6cce18e2749be199f4a5f61868272c2e91503fb4ccb8"
	I1003 19:39:17.206191  481965 cri.go:89] found id: "50b207c92dde75b009a0a2439f4af8008c52855e0ddbc54dcf57ab3bd1972302"
	I1003 19:39:17.206195  481965 cri.go:89] found id: "c2a31dbd1b598431e3e46d051690749feb66f319d34b0915aae14a51b8c1b0e2"
	I1003 19:39:17.206204  481965 cri.go:89] found id: "9e1e9b4fe19a20d0e1d02f1ab66d7f7479fb8f666b2994af5f888db15ff382d4"
	I1003 19:39:17.206212  481965 cri.go:89] found id: "8ed7a25aeb889c9f8a8428310aeb66737ce47377bcda2f1f2e1c8885151af962"
	I1003 19:39:17.206215  481965 cri.go:89] found id: ""
	I1003 19:39:17.206263  481965 ssh_runner.go:195] Run: sudo runc list -f json
	I1003 19:39:17.217674  481965 retry.go:31] will retry after 425.436976ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-03T19:39:17Z" level=error msg="open /run/runc: no such file or directory"
	I1003 19:39:17.643313  481965 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 19:39:17.656959  481965 pause.go:51] kubelet running: false
	I1003 19:39:17.657053  481965 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1003 19:39:17.836014  481965 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1003 19:39:17.836124  481965 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1003 19:39:17.910207  481965 cri.go:89] found id: "aa091721e2bf929a06f8f2a0382b1ac27830c5ef2bedaeb775f4567f2a80447c"
	I1003 19:39:17.910231  481965 cri.go:89] found id: "08858262c415390ebd844284cd70070377a032c8c9eb33572a8ede338609d2c5"
	I1003 19:39:17.910236  481965 cri.go:89] found id: "9a21627a747b30eb7424912a81297de7e4b519fb2f1252d457725408bd116383"
	I1003 19:39:17.910240  481965 cri.go:89] found id: "536d418166ee54c56a8550cc5c3e8e5c8328113ba2d06a9231fa1c71db5c6035"
	I1003 19:39:17.910243  481965 cri.go:89] found id: "3758592f491ab78c49e621316a06fabe1198eeb6f1be7d8ed8d05bc65d190237"
	I1003 19:39:17.910247  481965 cri.go:89] found id: "b652fe32e2a41b7f6685f05ea15d89051280d1a714c5ade044ee7267681f63c0"
	I1003 19:39:17.910281  481965 cri.go:89] found id: "812c215ff131175f339b6cce18e2749be199f4a5f61868272c2e91503fb4ccb8"
	I1003 19:39:17.910293  481965 cri.go:89] found id: "50b207c92dde75b009a0a2439f4af8008c52855e0ddbc54dcf57ab3bd1972302"
	I1003 19:39:17.910297  481965 cri.go:89] found id: "c2a31dbd1b598431e3e46d051690749feb66f319d34b0915aae14a51b8c1b0e2"
	I1003 19:39:17.910304  481965 cri.go:89] found id: "9e1e9b4fe19a20d0e1d02f1ab66d7f7479fb8f666b2994af5f888db15ff382d4"
	I1003 19:39:17.910313  481965 cri.go:89] found id: "8ed7a25aeb889c9f8a8428310aeb66737ce47377bcda2f1f2e1c8885151af962"
	I1003 19:39:17.910317  481965 cri.go:89] found id: ""
	I1003 19:39:17.910380  481965 ssh_runner.go:195] Run: sudo runc list -f json
	I1003 19:39:17.924925  481965 out.go:203] 
	W1003 19:39:17.927837  481965 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-03T19:39:17Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-03T19:39:17Z" level=error msg="open /run/runc: no such file or directory"
	
	W1003 19:39:17.927862  481965 out.go:285] * 
	* 
	W1003 19:39:17.934819  481965 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 19:39:17.939605  481965 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p no-preload-643397 --alsologtostderr -v=1 failed: exit status 80
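The exit status 80 reduces to the final step in the trace above: pause disables the kubelet (kubelet running goes from true to false), enumerates the kube-system, kubernetes-dashboard and istio-operator containers through crictl, and then shells out to sudo runc list -f json, which keeps failing because /run/runc (runc's default state root) is absent on this crio node. A rough sketch for replaying both probes by hand against the same profile, with the commands taken verbatim from the log:

	# Container enumeration used by the pause path (this part succeeds):
	out/minikube-linux-arm64 -p no-preload-643397 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# The step that fails: /run/runc does not exist, so raw runc cannot see the crio-managed containers:
	out/minikube-linux-arm64 -p no-preload-643397 ssh -- sudo runc list -f json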
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-643397
helpers_test.go:243: (dbg) docker inspect no-preload-643397:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2ff626657df750cf9a1329bdf9d0fad13d27c9b5d259ea3feeee2866dd91e501",
	        "Created": "2025-10-03T19:36:25.722491125Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 478544,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-03T19:38:01.341398026Z",
	            "FinishedAt": "2025-10-03T19:38:00.366153143Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/2ff626657df750cf9a1329bdf9d0fad13d27c9b5d259ea3feeee2866dd91e501/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2ff626657df750cf9a1329bdf9d0fad13d27c9b5d259ea3feeee2866dd91e501/hostname",
	        "HostsPath": "/var/lib/docker/containers/2ff626657df750cf9a1329bdf9d0fad13d27c9b5d259ea3feeee2866dd91e501/hosts",
	        "LogPath": "/var/lib/docker/containers/2ff626657df750cf9a1329bdf9d0fad13d27c9b5d259ea3feeee2866dd91e501/2ff626657df750cf9a1329bdf9d0fad13d27c9b5d259ea3feeee2866dd91e501-json.log",
	        "Name": "/no-preload-643397",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-643397:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-643397",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2ff626657df750cf9a1329bdf9d0fad13d27c9b5d259ea3feeee2866dd91e501",
	                "LowerDir": "/var/lib/docker/overlay2/75229aada1a7c5cdb860071c36cb7ed171994b4cb8c1ec0abce827b45a7f840c-init/diff:/var/lib/docker/overlay2/87b205803817b0b71a214d995ab7e10a92033bbf72d76d6e052f1d21ccecb313/diff",
	                "MergedDir": "/var/lib/docker/overlay2/75229aada1a7c5cdb860071c36cb7ed171994b4cb8c1ec0abce827b45a7f840c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/75229aada1a7c5cdb860071c36cb7ed171994b4cb8c1ec0abce827b45a7f840c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/75229aada1a7c5cdb860071c36cb7ed171994b4cb8c1ec0abce827b45a7f840c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-643397",
	                "Source": "/var/lib/docker/volumes/no-preload-643397/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-643397",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-643397",
	                "name.minikube.sigs.k8s.io": "no-preload-643397",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c3258dbab0862e75fede7d1477febb5b523c6d2e4293667abc9a871b84cc4470",
	            "SandboxKey": "/var/run/docker/netns/c3258dbab086",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33438"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33439"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33442"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33440"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33441"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-643397": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3a:3f:19:06:81:d6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f8dcbeddfcb1aa31ce25637ca1a7b831d4c9bab55d750a9a6b43e000061a3784",
	                    "EndpointID": "b5b7be564bb38f7cbbb6c10acb413cea9545fae3c40093044c46007b0a138ce8",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-643397",
	                        "2ff626657df7"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
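The Ports map in the inspect output above is the same data the pause run used to find its SSH endpoint: the trace shows docker container inspect -f "{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}" resolving to host port 33438. The same template works for any of the published ports; a small sketch against this container:

	# SSH mapping (22/tcp -> 33438 in this run):
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-643397
	# API server mapping (8443/tcp -> 33441 in this run):
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' no-preload-643397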
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-643397 -n no-preload-643397
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-643397 -n no-preload-643397: exit status 2 (348.725812ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
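--format={{.Host}} only reports the host state, so the command prints Running yet exits 2: the container is up but the Kubernetes components are not, since the failed pause had already disabled the kubelet. For the per-component breakdown with the same binary, a small sketch (the --output json form of minikube status is assumed here):

	# Human-readable breakdown (Host, Kubelet, APIServer, Kubeconfig):
	out/minikube-linux-arm64 status -p no-preload-643397
	# Machine-readable variant:
	out/minikube-linux-arm64 status -p no-preload-643397 --output json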
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-643397 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-643397 logs -n 25: (1.408656442s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p cert-expiration-324520 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-324520   │ jenkins │ v1.37.0 │ 03 Oct 25 19:32 UTC │ 03 Oct 25 19:33 UTC │
	│ delete  │ -p force-systemd-env-159095                                                                                                                                                                                                                   │ force-systemd-env-159095 │ jenkins │ v1.37.0 │ 03 Oct 25 19:34 UTC │ 03 Oct 25 19:34 UTC │
	│ start   │ -p cert-options-305866 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-305866      │ jenkins │ v1.37.0 │ 03 Oct 25 19:34 UTC │ 03 Oct 25 19:34 UTC │
	│ ssh     │ cert-options-305866 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-305866      │ jenkins │ v1.37.0 │ 03 Oct 25 19:34 UTC │ 03 Oct 25 19:34 UTC │
	│ ssh     │ -p cert-options-305866 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-305866      │ jenkins │ v1.37.0 │ 03 Oct 25 19:34 UTC │ 03 Oct 25 19:34 UTC │
	│ delete  │ -p cert-options-305866                                                                                                                                                                                                                        │ cert-options-305866      │ jenkins │ v1.37.0 │ 03 Oct 25 19:34 UTC │ 03 Oct 25 19:35 UTC │
	│ start   │ -p old-k8s-version-174543 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-174543   │ jenkins │ v1.37.0 │ 03 Oct 25 19:35 UTC │ 03 Oct 25 19:36 UTC │
	│ start   │ -p cert-expiration-324520 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-324520   │ jenkins │ v1.37.0 │ 03 Oct 25 19:36 UTC │ 03 Oct 25 19:36 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-174543 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-174543   │ jenkins │ v1.37.0 │ 03 Oct 25 19:36 UTC │                     │
	│ stop    │ -p old-k8s-version-174543 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-174543   │ jenkins │ v1.37.0 │ 03 Oct 25 19:36 UTC │ 03 Oct 25 19:36 UTC │
	│ delete  │ -p cert-expiration-324520                                                                                                                                                                                                                     │ cert-expiration-324520   │ jenkins │ v1.37.0 │ 03 Oct 25 19:36 UTC │ 03 Oct 25 19:36 UTC │
	│ start   │ -p no-preload-643397 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-643397        │ jenkins │ v1.37.0 │ 03 Oct 25 19:36 UTC │ 03 Oct 25 19:37 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-174543 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-174543   │ jenkins │ v1.37.0 │ 03 Oct 25 19:36 UTC │ 03 Oct 25 19:36 UTC │
	│ start   │ -p old-k8s-version-174543 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-174543   │ jenkins │ v1.37.0 │ 03 Oct 25 19:36 UTC │ 03 Oct 25 19:37 UTC │
	│ image   │ old-k8s-version-174543 image list --format=json                                                                                                                                                                                               │ old-k8s-version-174543   │ jenkins │ v1.37.0 │ 03 Oct 25 19:37 UTC │ 03 Oct 25 19:37 UTC │
	│ pause   │ -p old-k8s-version-174543 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-174543   │ jenkins │ v1.37.0 │ 03 Oct 25 19:37 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-643397 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-643397        │ jenkins │ v1.37.0 │ 03 Oct 25 19:37 UTC │                     │
	│ stop    │ -p no-preload-643397 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-643397        │ jenkins │ v1.37.0 │ 03 Oct 25 19:37 UTC │ 03 Oct 25 19:38 UTC │
	│ delete  │ -p old-k8s-version-174543                                                                                                                                                                                                                     │ old-k8s-version-174543   │ jenkins │ v1.37.0 │ 03 Oct 25 19:37 UTC │ 03 Oct 25 19:37 UTC │
	│ delete  │ -p old-k8s-version-174543                                                                                                                                                                                                                     │ old-k8s-version-174543   │ jenkins │ v1.37.0 │ 03 Oct 25 19:37 UTC │ 03 Oct 25 19:37 UTC │
	│ start   │ -p embed-certs-327416 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-327416       │ jenkins │ v1.37.0 │ 03 Oct 25 19:37 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-643397 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-643397        │ jenkins │ v1.37.0 │ 03 Oct 25 19:38 UTC │ 03 Oct 25 19:38 UTC │
	│ start   │ -p no-preload-643397 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-643397        │ jenkins │ v1.37.0 │ 03 Oct 25 19:38 UTC │ 03 Oct 25 19:39 UTC │
	│ image   │ no-preload-643397 image list --format=json                                                                                                                                                                                                    │ no-preload-643397        │ jenkins │ v1.37.0 │ 03 Oct 25 19:39 UTC │ 03 Oct 25 19:39 UTC │
	│ pause   │ -p no-preload-643397 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-643397        │ jenkins │ v1.37.0 │ 03 Oct 25 19:39 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/03 19:38:00
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 19:38:00.977951  478234 out.go:360] Setting OutFile to fd 1 ...
	I1003 19:38:00.978182  478234 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 19:38:00.978206  478234 out.go:374] Setting ErrFile to fd 2...
	I1003 19:38:00.978227  478234 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 19:38:00.978509  478234 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-284583/.minikube/bin
	I1003 19:38:00.978893  478234 out.go:368] Setting JSON to false
	I1003 19:38:00.979795  478234 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8432,"bootTime":1759511849,"procs":166,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1003 19:38:00.979893  478234 start.go:140] virtualization:  
	I1003 19:38:00.984093  478234 out.go:179] * [no-preload-643397] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1003 19:38:00.988236  478234 out.go:179]   - MINIKUBE_LOCATION=21625
	I1003 19:38:00.988308  478234 notify.go:220] Checking for updates...
	I1003 19:38:00.996960  478234 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 19:38:01.001082  478234 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21625-284583/kubeconfig
	I1003 19:38:01.004999  478234 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-284583/.minikube
	I1003 19:38:01.009272  478234 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1003 19:38:01.011489  478234 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 19:38:01.014997  478234 config.go:182] Loaded profile config "no-preload-643397": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 19:38:01.015564  478234 driver.go:421] Setting default libvirt URI to qemu:///system
	I1003 19:38:01.050661  478234 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1003 19:38:01.050815  478234 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 19:38:01.145976  478234 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-03 19:38:01.134806253 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1003 19:38:01.146091  478234 docker.go:318] overlay module found
	I1003 19:38:01.149984  478234 out.go:179] * Using the docker driver based on existing profile
	I1003 19:38:01.152101  478234 start.go:304] selected driver: docker
	I1003 19:38:01.152118  478234 start.go:924] validating driver "docker" against &{Name:no-preload-643397 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-643397 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9
p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 19:38:01.152228  478234 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 19:38:01.153245  478234 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 19:38:01.239818  478234 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-03 19:38:01.229023714 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1003 19:38:01.240177  478234 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 19:38:01.240200  478234 cni.go:84] Creating CNI manager for ""
	I1003 19:38:01.240263  478234 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 19:38:01.240297  478234 start.go:348] cluster config:
	{Name:no-preload-643397 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-643397 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 19:38:01.243727  478234 out.go:179] * Starting "no-preload-643397" primary control-plane node in "no-preload-643397" cluster
	I1003 19:38:01.245969  478234 cache.go:123] Beginning downloading kic base image for docker with crio
	I1003 19:38:01.249090  478234 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1003 19:38:01.252854  478234 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 19:38:01.252944  478234 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1003 19:38:01.253020  478234 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/config.json ...
	I1003 19:38:01.253423  478234 cache.go:107] acquiring lock: {Name:mk7cc8e90392b121da3fc2fa2839cd90be030987 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 19:38:01.253520  478234 cache.go:115] /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1003 19:38:01.253535  478234 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 115.94µs
	I1003 19:38:01.253553  478234 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1003 19:38:01.253570  478234 cache.go:107] acquiring lock: {Name:mk629d4402b8cf97e7e7b39bf007d7d385cd74c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 19:38:01.253607  478234 cache.go:115] /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1003 19:38:01.253618  478234 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 50.06µs
	I1003 19:38:01.253624  478234 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1003 19:38:01.253633  478234 cache.go:107] acquiring lock: {Name:mkd2a56be71d53969ad5666736c12fa03b4cc23b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 19:38:01.253666  478234 cache.go:115] /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1003 19:38:01.253676  478234 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 42.946µs
	I1003 19:38:01.253682  478234 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1003 19:38:01.253692  478234 cache.go:107] acquiring lock: {Name:mk92106990cd186a73d6cc849d81383dcc3cef29 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 19:38:01.253723  478234 cache.go:115] /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1003 19:38:01.253735  478234 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 43.643µs
	I1003 19:38:01.253741  478234 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1003 19:38:01.253750  478234 cache.go:107] acquiring lock: {Name:mkaa4b85211ddf86dbb4a58ea6b27051e9e3e961 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 19:38:01.253776  478234 cache.go:115] /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1003 19:38:01.253787  478234 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 37.129µs
	I1003 19:38:01.253793  478234 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1003 19:38:01.253802  478234 cache.go:107] acquiring lock: {Name:mkb05875322f2d80de3e0a433e30c3b3e43961f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 19:38:01.253842  478234 cache.go:115] /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1003 19:38:01.253851  478234 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 50.569µs
	I1003 19:38:01.253862  478234 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1003 19:38:01.253875  478234 cache.go:107] acquiring lock: {Name:mkf5fb1b6792a0e71c262e68ff69fb567f93ebde Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 19:38:01.253902  478234 cache.go:115] /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1003 19:38:01.253912  478234 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 37.867µs
	I1003 19:38:01.253918  478234 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1003 19:38:01.253285  478234 cache.go:107] acquiring lock: {Name:mk83e5b24e5c429aa699dd46e8de74a53fff017f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 19:38:01.253950  478234 cache.go:115] /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1003 19:38:01.253959  478234 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 687.263µs
	I1003 19:38:01.253965  478234 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1003 19:38:01.253971  478234 cache.go:87] Successfully saved all images to host disk.
	I1003 19:38:01.281304  478234 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1003 19:38:01.281325  478234 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1003 19:38:01.281339  478234 cache.go:232] Successfully downloaded all kic artifacts
	I1003 19:38:01.281362  478234 start.go:360] acquireMachinesLock for no-preload-643397: {Name:mkd464eef28f143df6be9e03c4b51988b6ba8cf8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 19:38:01.281414  478234 start.go:364] duration metric: took 35.799µs to acquireMachinesLock for "no-preload-643397"
	I1003 19:38:01.281434  478234 start.go:96] Skipping create...Using existing machine configuration
	I1003 19:38:01.281439  478234 fix.go:54] fixHost starting: 
	I1003 19:38:01.281704  478234 cli_runner.go:164] Run: docker container inspect no-preload-643397 --format={{.State.Status}}
	I1003 19:38:01.301841  478234 fix.go:112] recreateIfNeeded on no-preload-643397: state=Stopped err=<nil>
	W1003 19:38:01.301870  478234 fix.go:138] unexpected machine state, will restart: <nil>
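
The fix path above decides between recreating and restarting by reading the profile container's state with docker container inspect --format={{.State.Status}}; a container that is merely stopped gets docker start rather than a rebuild. A minimal sketch of that check in Go, shelling out to docker the same way the log does (the containerState helper and the hard-coded profile name are illustrative, not minikube's cli_runner):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerState reads a container's status with the same inspect format
	// string that appears in the log above; illustrative only.
	func containerState(name string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Status}}").Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		state, err := containerState("no-preload-643397")
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		if state != "running" {
			// Mirrors the "Restarting existing docker container" step below,
			// which amounts to `docker start no-preload-643397`.
			fmt.Println("container is", state, "- would run: docker start no-preload-643397")
		}
	}
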
	I1003 19:37:58.343732  477208 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21625-284583/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-327416:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.468846269s)
	I1003 19:37:58.343781  477208 kic.go:203] duration metric: took 4.469021484s to extract preloaded images to volume ...
	W1003 19:37:58.343932  477208 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1003 19:37:58.344051  477208 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1003 19:37:58.397758  477208 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-327416 --name embed-certs-327416 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-327416 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-327416 --network embed-certs-327416 --ip 192.168.85.2 --volume embed-certs-327416:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1003 19:37:58.716391  477208 cli_runner.go:164] Run: docker container inspect embed-certs-327416 --format={{.State.Running}}
	I1003 19:37:58.738386  477208 cli_runner.go:164] Run: docker container inspect embed-certs-327416 --format={{.State.Status}}
	I1003 19:37:58.762069  477208 cli_runner.go:164] Run: docker exec embed-certs-327416 stat /var/lib/dpkg/alternatives/iptables
	I1003 19:37:58.811282  477208 oci.go:144] the created container "embed-certs-327416" has a running status.
	I1003 19:37:58.811313  477208 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21625-284583/.minikube/machines/embed-certs-327416/id_rsa...
	I1003 19:37:59.394289  477208 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21625-284583/.minikube/machines/embed-certs-327416/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1003 19:37:59.412251  477208 cli_runner.go:164] Run: docker container inspect embed-certs-327416 --format={{.State.Status}}
	I1003 19:37:59.428450  477208 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1003 19:37:59.428468  477208 kic_runner.go:114] Args: [docker exec --privileged embed-certs-327416 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1003 19:37:59.469854  477208 cli_runner.go:164] Run: docker container inspect embed-certs-327416 --format={{.State.Status}}
	I1003 19:37:59.487325  477208 machine.go:93] provisionDockerMachine start ...
	I1003 19:37:59.487421  477208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-327416
	I1003 19:37:59.504290  477208 main.go:141] libmachine: Using SSH client type: native
	I1003 19:37:59.504617  477208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1003 19:37:59.504634  477208 main.go:141] libmachine: About to run SSH command:
	hostname
	I1003 19:37:59.505271  477208 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58274->127.0.0.1:33433: read: connection reset by peer
	I1003 19:38:02.636645  477208 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-327416
	
	I1003 19:38:02.636668  477208 ubuntu.go:182] provisioning hostname "embed-certs-327416"
	I1003 19:38:02.636791  477208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-327416
	I1003 19:38:02.655162  477208 main.go:141] libmachine: Using SSH client type: native
	I1003 19:38:02.655472  477208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1003 19:38:02.655484  477208 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-327416 && echo "embed-certs-327416" | sudo tee /etc/hostname
	I1003 19:38:02.802630  477208 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-327416
	
	I1003 19:38:02.802771  477208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-327416
	I1003 19:38:02.820286  477208 main.go:141] libmachine: Using SSH client type: native
	I1003 19:38:02.820619  477208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1003 19:38:02.820636  477208 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-327416' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-327416/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-327416' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 19:38:02.953441  477208 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 19:38:02.953472  477208 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21625-284583/.minikube CaCertPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21625-284583/.minikube}
	I1003 19:38:02.953495  477208 ubuntu.go:190] setting up certificates
	I1003 19:38:02.953504  477208 provision.go:84] configureAuth start
	I1003 19:38:02.953562  477208 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-327416
	I1003 19:38:01.306879  478234 out.go:252] * Restarting existing docker container for "no-preload-643397" ...
	I1003 19:38:01.306996  478234 cli_runner.go:164] Run: docker start no-preload-643397
	I1003 19:38:01.577099  478234 cli_runner.go:164] Run: docker container inspect no-preload-643397 --format={{.State.Status}}
	I1003 19:38:01.600399  478234 kic.go:430] container "no-preload-643397" state is running.
	I1003 19:38:01.600869  478234 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-643397
	I1003 19:38:01.623384  478234 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/config.json ...
	I1003 19:38:01.623615  478234 machine.go:93] provisionDockerMachine start ...
	I1003 19:38:01.623674  478234 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-643397
	I1003 19:38:01.644440  478234 main.go:141] libmachine: Using SSH client type: native
	I1003 19:38:01.644946  478234 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1003 19:38:01.644962  478234 main.go:141] libmachine: About to run SSH command:
	hostname
	I1003 19:38:01.645500  478234 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50230->127.0.0.1:33438: read: connection reset by peer
	I1003 19:38:04.790450  478234 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-643397
	
	I1003 19:38:04.790483  478234 ubuntu.go:182] provisioning hostname "no-preload-643397"
	I1003 19:38:04.790591  478234 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-643397
	I1003 19:38:04.819065  478234 main.go:141] libmachine: Using SSH client type: native
	I1003 19:38:04.819379  478234 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1003 19:38:04.819391  478234 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-643397 && echo "no-preload-643397" | sudo tee /etc/hostname
	I1003 19:38:04.982306  478234 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-643397
	
	I1003 19:38:04.982489  478234 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-643397
	I1003 19:38:05.020034  478234 main.go:141] libmachine: Using SSH client type: native
	I1003 19:38:05.020343  478234 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1003 19:38:05.020360  478234 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-643397' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-643397/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-643397' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 19:38:05.169484  478234 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 19:38:05.169514  478234 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21625-284583/.minikube CaCertPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21625-284583/.minikube}
	I1003 19:38:05.169539  478234 ubuntu.go:190] setting up certificates
	I1003 19:38:05.169549  478234 provision.go:84] configureAuth start
	I1003 19:38:05.169613  478234 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-643397
	I1003 19:38:05.193735  478234 provision.go:143] copyHostCerts
	I1003 19:38:05.193803  478234 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-284583/.minikube/ca.pem, removing ...
	I1003 19:38:05.193823  478234 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-284583/.minikube/ca.pem
	I1003 19:38:05.193898  478234 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21625-284583/.minikube/ca.pem (1082 bytes)
	I1003 19:38:05.194000  478234 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-284583/.minikube/cert.pem, removing ...
	I1003 19:38:05.194011  478234 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-284583/.minikube/cert.pem
	I1003 19:38:05.194039  478234 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21625-284583/.minikube/cert.pem (1123 bytes)
	I1003 19:38:05.194095  478234 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-284583/.minikube/key.pem, removing ...
	I1003 19:38:05.194105  478234 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-284583/.minikube/key.pem
	I1003 19:38:05.194130  478234 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21625-284583/.minikube/key.pem (1675 bytes)
	I1003 19:38:05.194183  478234 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca-key.pem org=jenkins.no-preload-643397 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-643397]
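
Both provisioners then mint a TLS server certificate whose subject alternative names cover loopback, the node IP, and the machine hostnames (the san=[...] list above), valid for the CertExpiration window from the cluster config (26280h). A minimal self-signed sketch of such a certificate with Go's crypto/x509, assuming the same SANs; the real flow signs the server key with ca.pem/ca-key.pem under .minikube/certs rather than self-signing:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-643397"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
			DNSNames:     []string{"localhost", "minikube", "no-preload-643397"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		// Self-signed for brevity: the template doubles as its own parent.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		fmt.Println("server certificate DER length:", len(der))
	}
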
	I1003 19:38:02.983464  477208 provision.go:143] copyHostCerts
	I1003 19:38:02.983525  477208 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-284583/.minikube/ca.pem, removing ...
	I1003 19:38:02.983543  477208 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-284583/.minikube/ca.pem
	I1003 19:38:02.983615  477208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21625-284583/.minikube/ca.pem (1082 bytes)
	I1003 19:38:02.983703  477208 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-284583/.minikube/cert.pem, removing ...
	I1003 19:38:02.983715  477208 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-284583/.minikube/cert.pem
	I1003 19:38:02.983742  477208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21625-284583/.minikube/cert.pem (1123 bytes)
	I1003 19:38:02.983807  477208 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-284583/.minikube/key.pem, removing ...
	I1003 19:38:02.983817  477208 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-284583/.minikube/key.pem
	I1003 19:38:02.983841  477208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21625-284583/.minikube/key.pem (1675 bytes)
	I1003 19:38:02.983896  477208 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca-key.pem org=jenkins.embed-certs-327416 san=[127.0.0.1 192.168.85.2 embed-certs-327416 localhost minikube]
	I1003 19:38:04.602458  477208 provision.go:177] copyRemoteCerts
	I1003 19:38:04.602531  477208 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 19:38:04.602598  477208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-327416
	I1003 19:38:04.619970  477208 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/embed-certs-327416/id_rsa Username:docker}
	I1003 19:38:04.717872  477208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 19:38:04.744810  477208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1003 19:38:04.763167  477208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1003 19:38:04.781917  477208 provision.go:87] duration metric: took 1.828388937s to configureAuth
	I1003 19:38:04.781946  477208 ubuntu.go:206] setting minikube options for container-runtime
	I1003 19:38:04.782186  477208 config.go:182] Loaded profile config "embed-certs-327416": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 19:38:04.782330  477208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-327416
	I1003 19:38:04.804184  477208 main.go:141] libmachine: Using SSH client type: native
	I1003 19:38:04.804499  477208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1003 19:38:04.804514  477208 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1003 19:38:05.199104  477208 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1003 19:38:05.199130  477208 machine.go:96] duration metric: took 5.711781672s to provisionDockerMachine
	I1003 19:38:05.199141  477208 client.go:171] duration metric: took 12.008385661s to LocalClient.Create
	I1003 19:38:05.199155  477208 start.go:167] duration metric: took 12.008453452s to libmachine.API.Create "embed-certs-327416"
	I1003 19:38:05.199163  477208 start.go:293] postStartSetup for "embed-certs-327416" (driver="docker")
	I1003 19:38:05.199173  477208 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 19:38:05.199242  477208 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 19:38:05.199295  477208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-327416
	I1003 19:38:05.223831  477208 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/embed-certs-327416/id_rsa Username:docker}
	I1003 19:38:05.323026  477208 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 19:38:05.326838  477208 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1003 19:38:05.326867  477208 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1003 19:38:05.326879  477208 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-284583/.minikube/addons for local assets ...
	I1003 19:38:05.326935  477208 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-284583/.minikube/files for local assets ...
	I1003 19:38:05.327023  477208 filesync.go:149] local asset: /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem -> 2864342.pem in /etc/ssl/certs
	I1003 19:38:05.327134  477208 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 19:38:05.339068  477208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem --> /etc/ssl/certs/2864342.pem (1708 bytes)
	I1003 19:38:05.359609  477208 start.go:296] duration metric: took 160.431486ms for postStartSetup
	I1003 19:38:05.360036  477208 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-327416
	I1003 19:38:05.394480  477208 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/embed-certs-327416/config.json ...
	I1003 19:38:05.394774  477208 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 19:38:05.394828  477208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-327416
	I1003 19:38:05.423150  477208 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/embed-certs-327416/id_rsa Username:docker}
	I1003 19:38:05.518231  477208 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1003 19:38:05.523626  477208 start.go:128] duration metric: took 12.336512765s to createHost
	I1003 19:38:05.523653  477208 start.go:83] releasing machines lock for "embed-certs-327416", held for 12.336644238s
	I1003 19:38:05.523729  477208 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-327416
	I1003 19:38:05.548335  477208 ssh_runner.go:195] Run: cat /version.json
	I1003 19:38:05.548397  477208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-327416
	I1003 19:38:05.548648  477208 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 19:38:05.548708  477208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-327416
	I1003 19:38:05.573848  477208 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/embed-certs-327416/id_rsa Username:docker}
	I1003 19:38:05.583398  477208 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/embed-certs-327416/id_rsa Username:docker}
	I1003 19:38:05.788861  477208 ssh_runner.go:195] Run: systemctl --version
	I1003 19:38:05.796527  477208 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1003 19:38:05.847978  477208 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1003 19:38:05.852759  477208 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 19:38:05.852835  477208 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 19:38:05.886640  477208 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1003 19:38:05.886665  477208 start.go:495] detecting cgroup driver to use...
	I1003 19:38:05.886698  477208 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1003 19:38:05.886752  477208 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 19:38:05.908312  477208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 19:38:05.923274  477208 docker.go:218] disabling cri-docker service (if available) ...
	I1003 19:38:05.923343  477208 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1003 19:38:05.940676  477208 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1003 19:38:05.960884  477208 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1003 19:38:06.110119  477208 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1003 19:38:06.271901  477208 docker.go:234] disabling docker service ...
	I1003 19:38:06.271973  477208 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1003 19:38:06.310157  477208 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1003 19:38:06.325131  477208 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1003 19:38:06.468782  477208 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1003 19:38:06.619698  477208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1003 19:38:06.639658  477208 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 19:38:06.654816  477208 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1003 19:38:06.654894  477208 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:38:06.664368  477208 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1003 19:38:06.664451  477208 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:38:06.673782  477208 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:38:06.682741  477208 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:38:06.692096  477208 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 19:38:06.700667  477208 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:38:06.709560  477208 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:38:06.724108  477208 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:38:06.733573  477208 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 19:38:06.742476  477208 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 19:38:06.750697  477208 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 19:38:06.892627  477208 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1003 19:38:07.049117  477208 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1003 19:38:07.049188  477208 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1003 19:38:07.055839  477208 start.go:563] Will wait 60s for crictl version
	I1003 19:38:07.055906  477208 ssh_runner.go:195] Run: which crictl
	I1003 19:38:07.059358  477208 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1003 19:38:07.087959  477208 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1003 19:38:07.088042  477208 ssh_runner.go:195] Run: crio --version
	I1003 19:38:07.122269  477208 ssh_runner.go:195] Run: crio --version
	I1003 19:38:07.163347  477208 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1003 19:38:07.166346  477208 cli_runner.go:164] Run: docker network inspect embed-certs-327416 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 19:38:07.192250  477208 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1003 19:38:07.196293  477208 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 19:38:07.209532  477208 kubeadm.go:883] updating cluster {Name:embed-certs-327416 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-327416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath
: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1003 19:38:07.209643  477208 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 19:38:07.209706  477208 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 19:38:07.251767  477208 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 19:38:07.251797  477208 crio.go:433] Images already preloaded, skipping extraction
	I1003 19:38:07.251852  477208 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 19:38:07.288163  477208 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 19:38:07.288187  477208 cache_images.go:85] Images are preloaded, skipping loading
	I1003 19:38:07.288196  477208 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1003 19:38:07.288300  477208 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-327416 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-327416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1003 19:38:07.288381  477208 ssh_runner.go:195] Run: crio config
	I1003 19:38:07.380945  477208 cni.go:84] Creating CNI manager for ""
	I1003 19:38:07.380969  477208 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 19:38:07.381013  477208 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1003 19:38:07.381042  477208 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-327416 NodeName:embed-certs-327416 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 19:38:07.381223  477208 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-327416"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1003 19:38:07.381325  477208 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1003 19:38:07.389811  477208 binaries.go:44] Found k8s binaries, skipping transfer
	I1003 19:38:07.389891  477208 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1003 19:38:07.410258  477208 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1003 19:38:07.434409  477208 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 19:38:07.449633  477208 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
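
The kubeadm config printed above is one multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) written to /var/tmp/minikube/kubeadm.yaml.new in the scp step just above. A small sketch of splitting such a stream and reporting each document's kind, assuming gopkg.in/yaml.v3 is available; the splitDocs helper is illustrative, not part of minikube:

	package main

	import (
		"fmt"
		"strings"

		"gopkg.in/yaml.v3"
	)

	// splitDocs splits a multi-document YAML stream on the standard "---" separator.
	func splitDocs(stream string) []string {
		var docs []string
		for _, d := range strings.Split(stream, "\n---\n") {
			if strings.TrimSpace(d) != "" {
				docs = append(docs, d)
			}
		}
		return docs
	}

	func main() {
		docs := []string{
			"apiVersion: kubeadm.k8s.io/v1beta4\nkind: InitConfiguration",
			"apiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration",
			"apiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration",
			"apiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration",
		}
		stream := strings.Join(docs, "\n---\n")

		for _, doc := range splitDocs(stream) {
			var meta struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			if err := yaml.Unmarshal([]byte(doc), &meta); err != nil {
				fmt.Println("parse error:", err)
				continue
			}
			fmt.Printf("%s (%s)\n", meta.Kind, meta.APIVersion)
		}
	}
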
	I1003 19:38:07.468204  477208 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1003 19:38:07.472607  477208 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 19:38:07.482380  477208 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 19:38:07.638161  477208 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 19:38:07.655466  477208 certs.go:69] Setting up /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/embed-certs-327416 for IP: 192.168.85.2
	I1003 19:38:07.655485  477208 certs.go:195] generating shared ca certs ...
	I1003 19:38:07.655501  477208 certs.go:227] acquiring lock for ca certs: {Name:mk5a10e6c921326e9c211447576eaeb893259ba7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:38:07.655634  477208 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21625-284583/.minikube/ca.key
	I1003 19:38:07.655671  477208 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.key
	I1003 19:38:07.655678  477208 certs.go:257] generating profile certs ...
	I1003 19:38:07.655731  477208 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/embed-certs-327416/client.key
	I1003 19:38:07.655744  477208 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/embed-certs-327416/client.crt with IP's: []
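
For context on the copyHostCerts steps earlier in both runs: each of ca.pem, cert.pem and key.pem is removed from the top of the .minikube directory if a copy is already present, then re-copied from .minikube/certs. A minimal remove-then-copy sketch in Go; the copyReplacing helper and the hard-coded base path are illustrative, not minikube's exec_runner:

	package main

	import (
		"fmt"
		"io"
		"os"
		"path/filepath"
	)

	// copyReplacing removes any existing destination file, then copies src to dst,
	// mirroring the "found ..., removing ..." / "cp: ..." lines in the log above.
	func copyReplacing(src, dst string) error {
		if _, err := os.Stat(dst); err == nil {
			if err := os.Remove(dst); err != nil {
				return err
			}
		}
		in, err := os.Open(src)
		if err != nil {
			return err
		}
		defer in.Close()
		out, err := os.Create(dst)
		if err != nil {
			return err
		}
		defer out.Close()
		_, err = io.Copy(out, in)
		return err
	}

	func main() {
		base := "/home/jenkins/minikube-integration/21625-284583/.minikube"
		for _, name := range []string{"ca.pem", "cert.pem", "key.pem"} {
			if err := copyReplacing(filepath.Join(base, "certs", name), filepath.Join(base, name)); err != nil {
				fmt.Println("copy failed:", err)
			}
		}
	}
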
	I1003 19:38:06.808389  478234 provision.go:177] copyRemoteCerts
	I1003 19:38:06.808503  478234 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 19:38:06.808589  478234 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-643397
	I1003 19:38:06.846599  478234 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/no-preload-643397/id_rsa Username:docker}
	I1003 19:38:06.945720  478234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 19:38:06.965381  478234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1003 19:38:06.985698  478234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1003 19:38:07.008639  478234 provision.go:87] duration metric: took 1.83907464s to configureAuth
	I1003 19:38:07.008718  478234 ubuntu.go:206] setting minikube options for container-runtime
	I1003 19:38:07.008988  478234 config.go:182] Loaded profile config "no-preload-643397": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 19:38:07.009166  478234 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-643397
	I1003 19:38:07.029147  478234 main.go:141] libmachine: Using SSH client type: native
	I1003 19:38:07.029463  478234 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1003 19:38:07.029484  478234 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1003 19:38:07.397011  478234 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1003 19:38:07.397035  478234 machine.go:96] duration metric: took 5.773410963s to provisionDockerMachine
	I1003 19:38:07.397046  478234 start.go:293] postStartSetup for "no-preload-643397" (driver="docker")
	I1003 19:38:07.397056  478234 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 19:38:07.397125  478234 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 19:38:07.397177  478234 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-643397
	I1003 19:38:07.423895  478234 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/no-preload-643397/id_rsa Username:docker}
	I1003 19:38:07.530288  478234 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 19:38:07.533974  478234 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1003 19:38:07.534003  478234 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1003 19:38:07.534014  478234 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-284583/.minikube/addons for local assets ...
	I1003 19:38:07.534074  478234 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-284583/.minikube/files for local assets ...
	I1003 19:38:07.534160  478234 filesync.go:149] local asset: /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem -> 2864342.pem in /etc/ssl/certs
	I1003 19:38:07.534275  478234 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 19:38:07.545922  478234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem --> /etc/ssl/certs/2864342.pem (1708 bytes)
	I1003 19:38:07.570413  478234 start.go:296] duration metric: took 173.352271ms for postStartSetup
	I1003 19:38:07.570493  478234 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 19:38:07.570538  478234 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-643397
	I1003 19:38:07.592176  478234 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/no-preload-643397/id_rsa Username:docker}
	I1003 19:38:07.690430  478234 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1003 19:38:07.695506  478234 fix.go:56] duration metric: took 6.414060463s for fixHost
	I1003 19:38:07.695530  478234 start.go:83] releasing machines lock for "no-preload-643397", held for 6.41410766s
	I1003 19:38:07.695595  478234 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-643397
	I1003 19:38:07.740549  478234 ssh_runner.go:195] Run: cat /version.json
	I1003 19:38:07.740606  478234 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 19:38:07.740615  478234 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-643397
	I1003 19:38:07.740657  478234 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-643397
	I1003 19:38:07.769128  478234 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/no-preload-643397/id_rsa Username:docker}
	I1003 19:38:07.773388  478234 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/no-preload-643397/id_rsa Username:docker}
	I1003 19:38:07.964434  478234 ssh_runner.go:195] Run: systemctl --version
	I1003 19:38:07.970907  478234 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1003 19:38:08.047920  478234 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1003 19:38:08.052456  478234 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 19:38:08.052532  478234 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 19:38:08.063232  478234 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1003 19:38:08.063250  478234 start.go:495] detecting cgroup driver to use...
	I1003 19:38:08.063281  478234 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1003 19:38:08.063330  478234 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 19:38:08.080591  478234 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 19:38:08.101806  478234 docker.go:218] disabling cri-docker service (if available) ...
	I1003 19:38:08.101866  478234 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1003 19:38:08.118335  478234 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1003 19:38:08.132940  478234 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1003 19:38:08.278117  478234 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1003 19:38:08.427254  478234 docker.go:234] disabling docker service ...
	I1003 19:38:08.427318  478234 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1003 19:38:08.443765  478234 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1003 19:38:08.458720  478234 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1003 19:38:08.621252  478234 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1003 19:38:08.791515  478234 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1003 19:38:08.805504  478234 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 19:38:08.819922  478234 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1003 19:38:08.819999  478234 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:38:08.828890  478234 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1003 19:38:08.829017  478234 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:38:08.839433  478234 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:38:08.847837  478234 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:38:08.857547  478234 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 19:38:08.866517  478234 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:38:08.875614  478234 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:38:08.884413  478234 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:38:08.893731  478234 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 19:38:08.902311  478234 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 19:38:08.910409  478234 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 19:38:09.066289  478234 ssh_runner.go:195] Run: sudo systemctl restart crio
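For reference, the sed edits logged above rewrite the CRI-O drop-in in place. Reconstructed from those commands (not captured from the node, and with section headers assumed from the standard CRI-O layout), /etc/crio/crio.conf.d/02-crio.conf ends up containing roughly this fragment:

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]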
	I1003 19:38:09.247110  478234 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1003 19:38:09.247180  478234 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1003 19:38:09.259748  478234 start.go:563] Will wait 60s for crictl version
	I1003 19:38:09.259824  478234 ssh_runner.go:195] Run: which crictl
	I1003 19:38:09.263822  478234 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1003 19:38:09.331819  478234 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1003 19:38:09.331909  478234 ssh_runner.go:195] Run: crio --version
	I1003 19:38:09.409159  478234 ssh_runner.go:195] Run: crio --version
	I1003 19:38:09.466939  478234 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1003 19:38:09.469793  478234 cli_runner.go:164] Run: docker network inspect no-preload-643397 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 19:38:09.494455  478234 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1003 19:38:09.498827  478234 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 19:38:09.511059  478234 kubeadm.go:883] updating cluster {Name:no-preload-643397 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-643397 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1003 19:38:09.511171  478234 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 19:38:09.511234  478234 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 19:38:09.563532  478234 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 19:38:09.563558  478234 cache_images.go:85] Images are preloaded, skipping loading
	I1003 19:38:09.563566  478234 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1003 19:38:09.563670  478234 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-643397 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-643397 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1003 19:38:09.563764  478234 ssh_runner.go:195] Run: crio config
	I1003 19:38:09.658953  478234 cni.go:84] Creating CNI manager for ""
	I1003 19:38:09.658978  478234 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 19:38:09.658993  478234 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1003 19:38:09.659016  478234 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-643397 NodeName:no-preload-643397 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 19:38:09.659143  478234 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-643397"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1003 19:38:09.659218  478234 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1003 19:38:09.675280  478234 binaries.go:44] Found k8s binaries, skipping transfer
	I1003 19:38:09.675351  478234 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1003 19:38:09.683810  478234 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1003 19:38:09.704660  478234 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 19:38:09.720023  478234 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1003 19:38:09.736587  478234 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1003 19:38:09.740497  478234 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 19:38:09.750216  478234 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 19:38:09.894857  478234 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 19:38:09.915168  478234 certs.go:69] Setting up /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397 for IP: 192.168.76.2
	I1003 19:38:09.915189  478234 certs.go:195] generating shared ca certs ...
	I1003 19:38:09.915205  478234 certs.go:227] acquiring lock for ca certs: {Name:mk5a10e6c921326e9c211447576eaeb893259ba7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:38:09.915341  478234 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21625-284583/.minikube/ca.key
	I1003 19:38:09.915393  478234 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.key
	I1003 19:38:09.915405  478234 certs.go:257] generating profile certs ...
	I1003 19:38:09.915491  478234 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/client.key
	I1003 19:38:09.915550  478234 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/apiserver.key.ee2e84a9
	I1003 19:38:09.915599  478234 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/proxy-client.key
	I1003 19:38:09.915716  478234 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/286434.pem (1338 bytes)
	W1003 19:38:09.915751  478234 certs.go:480] ignoring /home/jenkins/minikube-integration/21625-284583/.minikube/certs/286434_empty.pem, impossibly tiny 0 bytes
	I1003 19:38:09.915763  478234 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 19:38:09.915801  478234 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem (1082 bytes)
	I1003 19:38:09.915829  478234 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem (1123 bytes)
	I1003 19:38:09.915854  478234 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/key.pem (1675 bytes)
	I1003 19:38:09.915898  478234 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem (1708 bytes)
	I1003 19:38:09.916552  478234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 19:38:09.943838  478234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1003 19:38:09.992759  478234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 19:38:10.030566  478234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1003 19:38:10.115863  478234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1003 19:38:10.186533  478234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1003 19:38:10.274648  478234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 19:38:10.294439  478234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1003 19:38:10.314600  478234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 19:38:10.334604  478234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/certs/286434.pem --> /usr/share/ca-certificates/286434.pem (1338 bytes)
	I1003 19:38:10.354398  478234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem --> /usr/share/ca-certificates/2864342.pem (1708 bytes)
	I1003 19:38:10.383009  478234 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1003 19:38:10.397488  478234 ssh_runner.go:195] Run: openssl version
	I1003 19:38:10.403981  478234 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2864342.pem && ln -fs /usr/share/ca-certificates/2864342.pem /etc/ssl/certs/2864342.pem"
	I1003 19:38:10.413737  478234 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2864342.pem
	I1003 19:38:10.418313  478234 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  3 18:34 /usr/share/ca-certificates/2864342.pem
	I1003 19:38:10.418439  478234 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2864342.pem
	I1003 19:38:10.462044  478234 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2864342.pem /etc/ssl/certs/3ec20f2e.0"
	I1003 19:38:10.470805  478234 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 19:38:10.479786  478234 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 19:38:10.484026  478234 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  3 18:27 /usr/share/ca-certificates/minikubeCA.pem
	I1003 19:38:10.484142  478234 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 19:38:10.527277  478234 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1003 19:38:10.536242  478234 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/286434.pem && ln -fs /usr/share/ca-certificates/286434.pem /etc/ssl/certs/286434.pem"
	I1003 19:38:10.547112  478234 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/286434.pem
	I1003 19:38:10.551749  478234 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  3 18:34 /usr/share/ca-certificates/286434.pem
	I1003 19:38:10.551874  478234 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/286434.pem
	I1003 19:38:10.596183  478234 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/286434.pem /etc/ssl/certs/51391683.0"
	I1003 19:38:10.605163  478234 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 19:38:10.609910  478234 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1003 19:38:10.653288  478234 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1003 19:38:10.727526  478234 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1003 19:38:10.827622  478234 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1003 19:38:10.916253  478234 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1003 19:38:11.002404  478234 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
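Each `openssl x509 -checkend 86400` call above only asks whether the certificate expires within the next 24 hours. A minimal Go sketch of the same check (the path and window here are illustrative, not minikube's actual helper):

	// Rough equivalent of `openssl x509 -noout -in <cert> -checkend 86400`.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func expiresWithin(certPath string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(certPath)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", certPath)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		// True when NotAfter falls inside the window, i.e. the cert is about to expire.
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		expiring, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("expires within 24h:", expiring)
	}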
	I1003 19:38:11.091051  478234 kubeadm.go:400] StartCluster: {Name:no-preload-643397 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-643397 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bi
naryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 19:38:11.091205  478234 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1003 19:38:11.091314  478234 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1003 19:38:11.331777  478234 cri.go:89] found id: "b652fe32e2a41b7f6685f05ea15d89051280d1a714c5ade044ee7267681f63c0"
	I1003 19:38:11.331811  478234 cri.go:89] found id: "812c215ff131175f339b6cce18e2749be199f4a5f61868272c2e91503fb4ccb8"
	I1003 19:38:11.331817  478234 cri.go:89] found id: "50b207c92dde75b009a0a2439f4af8008c52855e0ddbc54dcf57ab3bd1972302"
	I1003 19:38:11.331821  478234 cri.go:89] found id: "c2a31dbd1b598431e3e46d051690749feb66f319d34b0915aae14a51b8c1b0e2"
	I1003 19:38:11.331824  478234 cri.go:89] found id: ""
	I1003 19:38:11.331874  478234 ssh_runner.go:195] Run: sudo runc list -f json
	W1003 19:38:11.378031  478234 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-03T19:38:11Z" level=error msg="open /run/runc: no such file or directory"
	I1003 19:38:11.378180  478234 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 19:38:11.407248  478234 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1003 19:38:11.407313  478234 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1003 19:38:11.407395  478234 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1003 19:38:11.422166  478234 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1003 19:38:11.422665  478234 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-643397" does not appear in /home/jenkins/minikube-integration/21625-284583/kubeconfig
	I1003 19:38:11.422834  478234 kubeconfig.go:62] /home/jenkins/minikube-integration/21625-284583/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-643397" cluster setting kubeconfig missing "no-preload-643397" context setting]
	I1003 19:38:11.423198  478234 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/kubeconfig: {Name:mkc1323fd87f4a78231a26d2dab0dff7feecf1e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:38:11.424773  478234 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1003 19:38:11.457327  478234 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1003 19:38:11.457362  478234 kubeadm.go:601] duration metric: took 50.030971ms to restartPrimaryControlPlane
	I1003 19:38:11.457371  478234 kubeadm.go:402] duration metric: took 366.341282ms to StartCluster
	I1003 19:38:11.457387  478234 settings.go:142] acquiring lock: {Name:mkc95577dbc448e3409dfa2b5e53a3a1327cb451 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:38:11.457452  478234 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21625-284583/kubeconfig
	I1003 19:38:11.458029  478234 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/kubeconfig: {Name:mkc1323fd87f4a78231a26d2dab0dff7feecf1e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:38:11.458229  478234 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1003 19:38:11.458565  478234 config.go:182] Loaded profile config "no-preload-643397": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 19:38:11.458626  478234 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1003 19:38:11.458696  478234 addons.go:69] Setting storage-provisioner=true in profile "no-preload-643397"
	I1003 19:38:11.458715  478234 addons.go:238] Setting addon storage-provisioner=true in "no-preload-643397"
	W1003 19:38:11.458722  478234 addons.go:247] addon storage-provisioner should already be in state true
	I1003 19:38:11.458748  478234 host.go:66] Checking if "no-preload-643397" exists ...
	I1003 19:38:11.459365  478234 cli_runner.go:164] Run: docker container inspect no-preload-643397 --format={{.State.Status}}
	I1003 19:38:11.459653  478234 addons.go:69] Setting dashboard=true in profile "no-preload-643397"
	I1003 19:38:11.459674  478234 addons.go:238] Setting addon dashboard=true in "no-preload-643397"
	W1003 19:38:11.459681  478234 addons.go:247] addon dashboard should already be in state true
	I1003 19:38:11.459703  478234 host.go:66] Checking if "no-preload-643397" exists ...
	I1003 19:38:11.460109  478234 cli_runner.go:164] Run: docker container inspect no-preload-643397 --format={{.State.Status}}
	I1003 19:38:11.462467  478234 addons.go:69] Setting default-storageclass=true in profile "no-preload-643397"
	I1003 19:38:11.462491  478234 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-643397"
	I1003 19:38:11.462764  478234 cli_runner.go:164] Run: docker container inspect no-preload-643397 --format={{.State.Status}}
	I1003 19:38:11.464838  478234 out.go:179] * Verifying Kubernetes components...
	I1003 19:38:11.468034  478234 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 19:38:11.531035  478234 addons.go:238] Setting addon default-storageclass=true in "no-preload-643397"
	W1003 19:38:11.531058  478234 addons.go:247] addon default-storageclass should already be in state true
	I1003 19:38:11.531083  478234 host.go:66] Checking if "no-preload-643397" exists ...
	I1003 19:38:11.531493  478234 cli_runner.go:164] Run: docker container inspect no-preload-643397 --format={{.State.Status}}
	I1003 19:38:11.538767  478234 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 19:38:11.538827  478234 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1003 19:38:11.541838  478234 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1003 19:38:08.062224  477208 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/embed-certs-327416/client.crt ...
	I1003 19:38:08.062262  477208 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/embed-certs-327416/client.crt: {Name:mkd12e089d2efdef91909060ee8b687b378a7c79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:38:08.062454  477208 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/embed-certs-327416/client.key ...
	I1003 19:38:08.062470  477208 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/embed-certs-327416/client.key: {Name:mkdf04b1a2c3641454003eae37f6bb4de7cadf06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:38:08.062568  477208 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/embed-certs-327416/apiserver.key.00090923
	I1003 19:38:08.062588  477208 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/embed-certs-327416/apiserver.crt.00090923 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1003 19:38:09.851041  477208 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/embed-certs-327416/apiserver.crt.00090923 ...
	I1003 19:38:09.851081  477208 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/embed-certs-327416/apiserver.crt.00090923: {Name:mk677df1e84177a76aedc7865cd935dc39fc022a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:38:09.851266  477208 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/embed-certs-327416/apiserver.key.00090923 ...
	I1003 19:38:09.851294  477208 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/embed-certs-327416/apiserver.key.00090923: {Name:mkc0e7f828a59dbd78a39b955a29702e00cca82f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:38:09.851378  477208 certs.go:382] copying /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/embed-certs-327416/apiserver.crt.00090923 -> /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/embed-certs-327416/apiserver.crt
	I1003 19:38:09.851473  477208 certs.go:386] copying /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/embed-certs-327416/apiserver.key.00090923 -> /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/embed-certs-327416/apiserver.key
	I1003 19:38:09.851539  477208 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/embed-certs-327416/proxy-client.key
	I1003 19:38:09.851566  477208 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/embed-certs-327416/proxy-client.crt with IP's: []
	I1003 19:38:11.446997  477208 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/embed-certs-327416/proxy-client.crt ...
	I1003 19:38:11.447034  477208 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/embed-certs-327416/proxy-client.crt: {Name:mkc50501e8a07e47ddb1c2b07b860d6b459421fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:38:11.447213  477208 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/embed-certs-327416/proxy-client.key ...
	I1003 19:38:11.447231  477208 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/embed-certs-327416/proxy-client.key: {Name:mka4b48c0876e5acf71c0acf3306176930b77b49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
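The 477208 run above is generating the embed-certs profile certificates from scratch. At its core that is issuing a CA-signed certificate carrying the listed IP SANs; a self-contained Go sketch of that step follows, where the throwaway CA and output file name stand in for minikube's shared CA material and lock-protected writes:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Throwaway CA; the real run reuses the existing minikubeCA key pair.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Leaf certificate with the IP SANs seen in the log above.
		leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		leafTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
			},
		}
		leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)

		// Write the PEM cert; the matching .key file would be written alongside it the same way.
		out, _ := os.Create("apiserver.crt")
		defer out.Close()
		pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
	}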
	I1003 19:38:11.447411  477208 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/286434.pem (1338 bytes)
	W1003 19:38:11.447459  477208 certs.go:480] ignoring /home/jenkins/minikube-integration/21625-284583/.minikube/certs/286434_empty.pem, impossibly tiny 0 bytes
	I1003 19:38:11.447475  477208 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 19:38:11.447503  477208 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem (1082 bytes)
	I1003 19:38:11.447529  477208 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem (1123 bytes)
	I1003 19:38:11.447558  477208 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/key.pem (1675 bytes)
	I1003 19:38:11.447604  477208 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem (1708 bytes)
	I1003 19:38:11.448244  477208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 19:38:11.507863  477208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1003 19:38:11.568933  477208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 19:38:11.599437  477208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1003 19:38:11.657275  477208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/embed-certs-327416/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1003 19:38:11.678841  477208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/embed-certs-327416/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1003 19:38:11.700063  477208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/embed-certs-327416/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 19:38:11.720665  477208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/embed-certs-327416/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1003 19:38:11.742315  477208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem --> /usr/share/ca-certificates/2864342.pem (1708 bytes)
	I1003 19:38:11.764091  477208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 19:38:11.783530  477208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/certs/286434.pem --> /usr/share/ca-certificates/286434.pem (1338 bytes)
	I1003 19:38:11.825201  477208 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1003 19:38:11.876264  477208 ssh_runner.go:195] Run: openssl version
	I1003 19:38:11.891256  477208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 19:38:11.909330  477208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 19:38:11.919248  477208 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  3 18:27 /usr/share/ca-certificates/minikubeCA.pem
	I1003 19:38:11.919318  477208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 19:38:11.993610  477208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1003 19:38:12.002023  477208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/286434.pem && ln -fs /usr/share/ca-certificates/286434.pem /etc/ssl/certs/286434.pem"
	I1003 19:38:12.017848  477208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/286434.pem
	I1003 19:38:12.022773  477208 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  3 18:34 /usr/share/ca-certificates/286434.pem
	I1003 19:38:12.022843  477208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/286434.pem
	I1003 19:38:12.085429  477208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/286434.pem /etc/ssl/certs/51391683.0"
	I1003 19:38:12.097972  477208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2864342.pem && ln -fs /usr/share/ca-certificates/2864342.pem /etc/ssl/certs/2864342.pem"
	I1003 19:38:12.107148  477208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2864342.pem
	I1003 19:38:12.112963  477208 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  3 18:34 /usr/share/ca-certificates/2864342.pem
	I1003 19:38:12.113052  477208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2864342.pem
	I1003 19:38:12.173609  477208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2864342.pem /etc/ssl/certs/3ec20f2e.0"
	I1003 19:38:12.182219  477208 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 19:38:12.189608  477208 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1003 19:38:12.189698  477208 kubeadm.go:400] StartCluster: {Name:embed-certs-327416 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-327416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: S
ocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 19:38:12.189802  477208 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1003 19:38:12.189881  477208 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1003 19:38:12.242839  477208 cri.go:89] found id: ""
	I1003 19:38:12.242994  477208 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 19:38:12.253587  477208 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1003 19:38:12.262634  477208 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1003 19:38:12.262743  477208 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 19:38:12.275565  477208 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1003 19:38:12.275634  477208 kubeadm.go:157] found existing configuration files:
	
	I1003 19:38:12.275723  477208 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1003 19:38:12.286255  477208 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1003 19:38:12.286316  477208 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1003 19:38:12.293718  477208 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1003 19:38:12.302876  477208 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1003 19:38:12.302936  477208 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1003 19:38:12.315426  477208 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1003 19:38:12.326127  477208 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1003 19:38:12.326188  477208 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1003 19:38:12.335351  477208 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1003 19:38:12.346200  477208 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1003 19:38:12.346319  477208 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1003 19:38:12.354824  477208 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1003 19:38:12.423564  477208 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1003 19:38:12.423969  477208 kubeadm.go:318] [preflight] Running pre-flight checks
	I1003 19:38:12.453207  477208 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1003 19:38:12.453282  477208 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1003 19:38:12.453324  477208 kubeadm.go:318] OS: Linux
	I1003 19:38:12.453376  477208 kubeadm.go:318] CGROUPS_CPU: enabled
	I1003 19:38:12.453429  477208 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1003 19:38:12.453479  477208 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1003 19:38:12.453531  477208 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1003 19:38:12.453581  477208 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1003 19:38:12.453635  477208 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1003 19:38:12.453684  477208 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1003 19:38:12.453735  477208 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1003 19:38:12.453784  477208 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1003 19:38:12.554720  477208 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1003 19:38:12.554838  477208 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1003 19:38:12.554939  477208 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1003 19:38:12.600211  477208 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1003 19:38:12.606664  477208 out.go:252]   - Generating certificates and keys ...
	I1003 19:38:12.606769  477208 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1003 19:38:12.606842  477208 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1003 19:38:11.541941  478234 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 19:38:11.541951  478234 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1003 19:38:11.542009  478234 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-643397
	I1003 19:38:11.544897  478234 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1003 19:38:11.544923  478234 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1003 19:38:11.544995  478234 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-643397
	I1003 19:38:11.584844  478234 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1003 19:38:11.584868  478234 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1003 19:38:11.584935  478234 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-643397
	I1003 19:38:11.618868  478234 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/no-preload-643397/id_rsa Username:docker}
	I1003 19:38:11.629897  478234 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/no-preload-643397/id_rsa Username:docker}
	I1003 19:38:11.644318  478234 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/no-preload-643397/id_rsa Username:docker}
	I1003 19:38:11.953959  478234 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 19:38:11.977870  478234 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 19:38:12.062590  478234 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1003 19:38:12.112299  478234 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1003 19:38:12.112366  478234 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1003 19:38:12.213329  478234 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1003 19:38:12.213352  478234 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1003 19:38:12.250778  478234 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1003 19:38:12.250798  478234 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1003 19:38:12.391240  478234 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1003 19:38:12.391264  478234 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1003 19:38:12.494554  478234 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1003 19:38:12.494581  478234 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1003 19:38:12.594705  478234 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1003 19:38:12.594730  478234 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1003 19:38:12.629759  478234 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1003 19:38:12.629784  478234 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1003 19:38:12.662013  478234 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1003 19:38:12.662038  478234 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1003 19:38:12.701796  478234 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1003 19:38:12.701821  478234 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1003 19:38:12.731331  478234 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1003 19:38:13.169131  477208 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1003 19:38:14.743707  477208 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1003 19:38:16.420742  477208 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1003 19:38:16.956189  477208 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1003 19:38:17.427282  477208 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1003 19:38:17.427696  477208 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [embed-certs-327416 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1003 19:38:17.699099  477208 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1003 19:38:17.699510  477208 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-327416 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1003 19:38:17.793293  477208 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1003 19:38:18.014838  477208 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1003 19:38:19.234626  477208 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1003 19:38:19.237162  477208 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1003 19:38:19.634461  477208 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1003 19:38:20.071979  477208 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1003 19:38:20.416361  477208 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1003 19:38:20.996135  477208 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1003 19:38:22.275457  477208 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1003 19:38:22.277341  477208 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1003 19:38:22.280123  477208 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1003 19:38:22.809805  478234 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.855802048s)
	I1003 19:38:22.809867  478234 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (10.831972537s)
	I1003 19:38:22.809894  478234 node_ready.go:35] waiting up to 6m0s for node "no-preload-643397" to be "Ready" ...
	I1003 19:38:22.810206  478234 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (10.747588705s)
	I1003 19:38:22.810479  478234 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (10.079115654s)
	I1003 19:38:22.813922  478234 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-643397 addons enable metrics-server
	
	I1003 19:38:22.830998  478234 node_ready.go:49] node "no-preload-643397" is "Ready"
	I1003 19:38:22.831030  478234 node_ready.go:38] duration metric: took 21.113942ms for node "no-preload-643397" to be "Ready" ...
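The node_ready wait above boils down to reading the node's Ready condition via the API. A minimal client-go sketch of that check (the kubeconfig path and node name are taken from this run; minikube's own poller adds retries and a timeout):

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21625-284583/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		node, err := cs.CoreV1().Nodes().Get(context.Background(), "no-preload-643397", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady {
				fmt.Printf("node %s Ready=%v\n", node.Name, cond.Status == corev1.ConditionTrue)
			}
		}
	}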
	I1003 19:38:22.831045  478234 api_server.go:52] waiting for apiserver process to appear ...
	I1003 19:38:22.831101  478234 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 19:38:22.853745  478234 api_server.go:72] duration metric: took 11.395480975s to wait for apiserver process to appear ...
	I1003 19:38:22.853773  478234 api_server.go:88] waiting for apiserver healthz status ...
	I1003 19:38:22.853795  478234 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1003 19:38:22.858137  478234 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1003 19:38:22.283549  477208 out.go:252]   - Booting up control plane ...
	I1003 19:38:22.283661  477208 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1003 19:38:22.283894  477208 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1003 19:38:22.294095  477208 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1003 19:38:22.312380  477208 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1003 19:38:22.312824  477208 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1003 19:38:22.323831  477208 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1003 19:38:22.324492  477208 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1003 19:38:22.324791  477208 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1003 19:38:22.531087  477208 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1003 19:38:22.531212  477208 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1003 19:38:22.861013  478234 addons.go:514] duration metric: took 11.402374026s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1003 19:38:22.864909  478234 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1003 19:38:22.866247  478234 api_server.go:141] control plane version: v1.34.1
	I1003 19:38:22.866276  478234 api_server.go:131] duration metric: took 12.496164ms to wait for apiserver health ...
	I1003 19:38:22.866286  478234 system_pods.go:43] waiting for kube-system pods to appear ...
	I1003 19:38:22.873325  478234 system_pods.go:59] 8 kube-system pods found
	I1003 19:38:22.873366  478234 system_pods.go:61] "coredns-66bc5c9577-h8n5p" [d7f4ec9d-9c68-4332-b6c7-e52f424dcd1e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1003 19:38:22.873404  478234 system_pods.go:61] "etcd-no-preload-643397" [642f5548-1caf-4bb4-9780-63e00e8b0a3c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1003 19:38:22.873419  478234 system_pods.go:61] "kindnet-7zwct" [bd0ecfeb-3764-425f-b7ae-e6f5b3e161d8] Running
	I1003 19:38:22.873430  478234 system_pods.go:61] "kube-apiserver-no-preload-643397" [6e4aa6fd-218d-45ce-a0d9-a1736936d2d3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1003 19:38:22.873441  478234 system_pods.go:61] "kube-controller-manager-no-preload-643397" [29843b74-a1d2-46af-ac5e-06f4d53a0ac4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1003 19:38:22.873446  478234 system_pods.go:61] "kube-proxy-lcs2q" [f25c0891-1202-477f-9cc9-5e41c3f1b9fb] Running
	I1003 19:38:22.873473  478234 system_pods.go:61] "kube-scheduler-no-preload-643397" [6865d4a0-3590-465e-81e1-927d271170c0] Running
	I1003 19:38:22.873484  478234 system_pods.go:61] "storage-provisioner" [355c16e4-3158-4ffc-9379-57747ed71cca] Running
	I1003 19:38:22.873492  478234 system_pods.go:74] duration metric: took 7.198254ms to wait for pod list to return data ...
	I1003 19:38:22.873505  478234 default_sa.go:34] waiting for default service account to be created ...
	I1003 19:38:22.880388  478234 default_sa.go:45] found service account: "default"
	I1003 19:38:22.880424  478234 default_sa.go:55] duration metric: took 6.911686ms for default service account to be created ...
	I1003 19:38:22.880451  478234 system_pods.go:116] waiting for k8s-apps to be running ...
	I1003 19:38:22.891458  478234 system_pods.go:86] 8 kube-system pods found
	I1003 19:38:22.891499  478234 system_pods.go:89] "coredns-66bc5c9577-h8n5p" [d7f4ec9d-9c68-4332-b6c7-e52f424dcd1e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1003 19:38:22.891529  478234 system_pods.go:89] "etcd-no-preload-643397" [642f5548-1caf-4bb4-9780-63e00e8b0a3c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1003 19:38:22.891545  478234 system_pods.go:89] "kindnet-7zwct" [bd0ecfeb-3764-425f-b7ae-e6f5b3e161d8] Running
	I1003 19:38:22.891553  478234 system_pods.go:89] "kube-apiserver-no-preload-643397" [6e4aa6fd-218d-45ce-a0d9-a1736936d2d3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1003 19:38:22.891581  478234 system_pods.go:89] "kube-controller-manager-no-preload-643397" [29843b74-a1d2-46af-ac5e-06f4d53a0ac4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1003 19:38:22.891598  478234 system_pods.go:89] "kube-proxy-lcs2q" [f25c0891-1202-477f-9cc9-5e41c3f1b9fb] Running
	I1003 19:38:22.891611  478234 system_pods.go:89] "kube-scheduler-no-preload-643397" [6865d4a0-3590-465e-81e1-927d271170c0] Running
	I1003 19:38:22.891616  478234 system_pods.go:89] "storage-provisioner" [355c16e4-3158-4ffc-9379-57747ed71cca] Running
	I1003 19:38:22.891624  478234 system_pods.go:126] duration metric: took 11.160849ms to wait for k8s-apps to be running ...
	I1003 19:38:22.891651  478234 system_svc.go:44] waiting for kubelet service to be running ....
	I1003 19:38:22.891723  478234 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 19:38:22.904566  478234 system_svc.go:56] duration metric: took 12.907205ms WaitForService to wait for kubelet
	I1003 19:38:22.904635  478234 kubeadm.go:586] duration metric: took 11.446373696s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 19:38:22.904670  478234 node_conditions.go:102] verifying NodePressure condition ...
	I1003 19:38:22.907835  478234 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1003 19:38:22.907908  478234 node_conditions.go:123] node cpu capacity is 2
	I1003 19:38:22.907935  478234 node_conditions.go:105] duration metric: took 3.244684ms to run NodePressure ...
	I1003 19:38:22.907960  478234 start.go:241] waiting for startup goroutines ...
	I1003 19:38:22.907994  478234 start.go:246] waiting for cluster config update ...
	I1003 19:38:22.908024  478234 start.go:255] writing updated cluster config ...
	I1003 19:38:22.908334  478234 ssh_runner.go:195] Run: rm -f paused
	I1003 19:38:22.913846  478234 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1003 19:38:22.918761  478234 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-h8n5p" in "kube-system" namespace to be "Ready" or be gone ...
	W1003 19:38:24.925810  478234 pod_ready.go:104] pod "coredns-66bc5c9577-h8n5p" is not "Ready", error: <nil>
	I1003 19:38:23.533159  477208 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.002026815s
	I1003 19:38:23.536778  477208 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1003 19:38:23.536878  477208 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1003 19:38:23.537112  477208 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1003 19:38:23.537203  477208 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1003 19:38:26.701026  477208 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.163382872s
	W1003 19:38:26.929323  478234 pod_ready.go:104] pod "coredns-66bc5c9577-h8n5p" is not "Ready", error: <nil>
	W1003 19:38:29.428010  478234 pod_ready.go:104] pod "coredns-66bc5c9577-h8n5p" is not "Ready", error: <nil>
	I1003 19:38:31.039510  477208 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 7.502107269s
	I1003 19:38:31.432694  477208 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 7.893676401s
	I1003 19:38:31.462451  477208 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1003 19:38:31.485768  477208 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1003 19:38:31.515781  477208 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1003 19:38:31.516010  477208 kubeadm.go:318] [mark-control-plane] Marking the node embed-certs-327416 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1003 19:38:31.539554  477208 kubeadm.go:318] [bootstrap-token] Using token: 5yu88r.ez5e2j3x2s20vqjm
	I1003 19:38:31.542613  477208 out.go:252]   - Configuring RBAC rules ...
	I1003 19:38:31.542745  477208 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1003 19:38:31.552466  477208 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1003 19:38:31.574884  477208 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1003 19:38:31.582994  477208 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1003 19:38:31.589350  477208 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1003 19:38:31.600254  477208 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1003 19:38:31.838735  477208 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1003 19:38:32.297926  477208 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1003 19:38:32.839629  477208 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1003 19:38:32.840769  477208 kubeadm.go:318] 
	I1003 19:38:32.840857  477208 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1003 19:38:32.840867  477208 kubeadm.go:318] 
	I1003 19:38:32.840948  477208 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1003 19:38:32.840958  477208 kubeadm.go:318] 
	I1003 19:38:32.841010  477208 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1003 19:38:32.841087  477208 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1003 19:38:32.841142  477208 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1003 19:38:32.841146  477208 kubeadm.go:318] 
	I1003 19:38:32.841211  477208 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1003 19:38:32.841218  477208 kubeadm.go:318] 
	I1003 19:38:32.841268  477208 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1003 19:38:32.841279  477208 kubeadm.go:318] 
	I1003 19:38:32.841333  477208 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1003 19:38:32.841412  477208 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1003 19:38:32.841483  477208 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1003 19:38:32.841488  477208 kubeadm.go:318] 
	I1003 19:38:32.841576  477208 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1003 19:38:32.841656  477208 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1003 19:38:32.841668  477208 kubeadm.go:318] 
	I1003 19:38:32.841756  477208 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 5yu88r.ez5e2j3x2s20vqjm \
	I1003 19:38:32.841864  477208 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:f66ff31263aa4cda6b17caa2076838d6a1918275f1c2773b90b119c0d4a4d71a \
	I1003 19:38:32.841885  477208 kubeadm.go:318] 	--control-plane 
	I1003 19:38:32.841890  477208 kubeadm.go:318] 
	I1003 19:38:32.841983  477208 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1003 19:38:32.841988  477208 kubeadm.go:318] 
	I1003 19:38:32.842073  477208 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 5yu88r.ez5e2j3x2s20vqjm \
	I1003 19:38:32.842179  477208 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:f66ff31263aa4cda6b17caa2076838d6a1918275f1c2773b90b119c0d4a4d71a 
	I1003 19:38:32.845828  477208 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1003 19:38:32.846070  477208 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1003 19:38:32.846186  477208 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1003 19:38:32.846196  477208 cni.go:84] Creating CNI manager for ""
	I1003 19:38:32.846203  477208 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 19:38:32.849429  477208 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1003 19:38:32.852320  477208 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1003 19:38:32.856812  477208 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1003 19:38:32.856835  477208 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1003 19:38:32.872270  477208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	W1003 19:38:31.928210  478234 pod_ready.go:104] pod "coredns-66bc5c9577-h8n5p" is not "Ready", error: <nil>
	W1003 19:38:34.424148  478234 pod_ready.go:104] pod "coredns-66bc5c9577-h8n5p" is not "Ready", error: <nil>
	I1003 19:38:33.232410  477208 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1003 19:38:33.232573  477208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 19:38:33.232662  477208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-327416 minikube.k8s.io/updated_at=2025_10_03T19_38_33_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=a43873c79fc22f8b1ccd29d3dfa635d392b09335 minikube.k8s.io/name=embed-certs-327416 minikube.k8s.io/primary=true
	I1003 19:38:33.712259  477208 ops.go:34] apiserver oom_adj: -16
	I1003 19:38:33.712370  477208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 19:38:34.212889  477208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 19:38:34.712547  477208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 19:38:35.212573  477208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 19:38:35.713000  477208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 19:38:36.212858  477208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 19:38:36.713373  477208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 19:38:36.865885  477208 kubeadm.go:1113] duration metric: took 3.633359094s to wait for elevateKubeSystemPrivileges
	I1003 19:38:36.865912  477208 kubeadm.go:402] duration metric: took 24.676219021s to StartCluster
	I1003 19:38:36.865929  477208 settings.go:142] acquiring lock: {Name:mkc95577dbc448e3409dfa2b5e53a3a1327cb451 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:38:36.865994  477208 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21625-284583/kubeconfig
	I1003 19:38:36.867630  477208 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/kubeconfig: {Name:mkc1323fd87f4a78231a26d2dab0dff7feecf1e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:38:36.873736  477208 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1003 19:38:36.874512  477208 config.go:182] Loaded profile config "embed-certs-327416": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 19:38:36.874585  477208 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1003 19:38:36.874646  477208 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1003 19:38:36.874818  477208 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-327416"
	I1003 19:38:36.874843  477208 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-327416"
	I1003 19:38:36.874864  477208 host.go:66] Checking if "embed-certs-327416" exists ...
	I1003 19:38:36.875343  477208 cli_runner.go:164] Run: docker container inspect embed-certs-327416 --format={{.State.Status}}
	I1003 19:38:36.878052  477208 out.go:179] * Verifying Kubernetes components...
	I1003 19:38:36.881384  477208 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 19:38:36.882607  477208 addons.go:69] Setting default-storageclass=true in profile "embed-certs-327416"
	I1003 19:38:36.882635  477208 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-327416"
	I1003 19:38:36.882970  477208 cli_runner.go:164] Run: docker container inspect embed-certs-327416 --format={{.State.Status}}
	I1003 19:38:36.918963  477208 addons.go:238] Setting addon default-storageclass=true in "embed-certs-327416"
	I1003 19:38:36.919003  477208 host.go:66] Checking if "embed-certs-327416" exists ...
	I1003 19:38:36.919419  477208 cli_runner.go:164] Run: docker container inspect embed-certs-327416 --format={{.State.Status}}
	I1003 19:38:36.928101  477208 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 19:38:36.933297  477208 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 19:38:36.933321  477208 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1003 19:38:36.933389  477208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-327416
	I1003 19:38:36.968698  477208 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/embed-certs-327416/id_rsa Username:docker}
	I1003 19:38:36.980816  477208 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1003 19:38:36.980838  477208 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1003 19:38:36.980900  477208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-327416
	I1003 19:38:37.006826  477208 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/embed-certs-327416/id_rsa Username:docker}
	I1003 19:38:37.444644  477208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1003 19:38:37.450626  477208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 19:38:37.523452  477208 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 19:38:37.523727  477208 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1003 19:38:39.004422  477208 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.553722295s)
	I1003 19:38:39.004675  477208 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.480901372s)
	I1003 19:38:39.004872  477208 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1003 19:38:39.004845  477208 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.481325558s)
	I1003 19:38:39.006185  477208 node_ready.go:35] waiting up to 6m0s for node "embed-certs-327416" to be "Ready" ...
	I1003 19:38:39.009129  477208 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	W1003 19:38:36.434122  478234 pod_ready.go:104] pod "coredns-66bc5c9577-h8n5p" is not "Ready", error: <nil>
	W1003 19:38:38.437357  478234 pod_ready.go:104] pod "coredns-66bc5c9577-h8n5p" is not "Ready", error: <nil>
	W1003 19:38:40.925089  478234 pod_ready.go:104] pod "coredns-66bc5c9577-h8n5p" is not "Ready", error: <nil>
	I1003 19:38:39.012550  477208 addons.go:514] duration metric: took 2.137895657s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1003 19:38:39.509864  477208 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-327416" context rescaled to 1 replicas
	W1003 19:38:41.010102  477208 node_ready.go:57] node "embed-certs-327416" has "Ready":"False" status (will retry)
	W1003 19:38:43.424174  478234 pod_ready.go:104] pod "coredns-66bc5c9577-h8n5p" is not "Ready", error: <nil>
	W1003 19:38:45.426323  478234 pod_ready.go:104] pod "coredns-66bc5c9577-h8n5p" is not "Ready", error: <nil>
	W1003 19:38:43.012121  477208 node_ready.go:57] node "embed-certs-327416" has "Ready":"False" status (will retry)
	W1003 19:38:45.016030  477208 node_ready.go:57] node "embed-certs-327416" has "Ready":"False" status (will retry)
	W1003 19:38:47.508862  477208 node_ready.go:57] node "embed-certs-327416" has "Ready":"False" status (will retry)
	W1003 19:38:47.923808  478234 pod_ready.go:104] pod "coredns-66bc5c9577-h8n5p" is not "Ready", error: <nil>
	W1003 19:38:49.924330  478234 pod_ready.go:104] pod "coredns-66bc5c9577-h8n5p" is not "Ready", error: <nil>
	W1003 19:38:49.508997  477208 node_ready.go:57] node "embed-certs-327416" has "Ready":"False" status (will retry)
	W1003 19:38:51.510173  477208 node_ready.go:57] node "embed-certs-327416" has "Ready":"False" status (will retry)
	W1003 19:38:52.425011  478234 pod_ready.go:104] pod "coredns-66bc5c9577-h8n5p" is not "Ready", error: <nil>
	W1003 19:38:54.925148  478234 pod_ready.go:104] pod "coredns-66bc5c9577-h8n5p" is not "Ready", error: <nil>
	W1003 19:38:54.010483  477208 node_ready.go:57] node "embed-certs-327416" has "Ready":"False" status (will retry)
	W1003 19:38:56.509685  477208 node_ready.go:57] node "embed-certs-327416" has "Ready":"False" status (will retry)
	W1003 19:38:57.424770  478234 pod_ready.go:104] pod "coredns-66bc5c9577-h8n5p" is not "Ready", error: <nil>
	W1003 19:38:59.425018  478234 pod_ready.go:104] pod "coredns-66bc5c9577-h8n5p" is not "Ready", error: <nil>
	W1003 19:38:59.009804  477208 node_ready.go:57] node "embed-certs-327416" has "Ready":"False" status (will retry)
	W1003 19:39:01.026299  477208 node_ready.go:57] node "embed-certs-327416" has "Ready":"False" status (will retry)
	W1003 19:39:01.925035  478234 pod_ready.go:104] pod "coredns-66bc5c9577-h8n5p" is not "Ready", error: <nil>
	I1003 19:39:02.924328  478234 pod_ready.go:94] pod "coredns-66bc5c9577-h8n5p" is "Ready"
	I1003 19:39:02.924360  478234 pod_ready.go:86] duration metric: took 40.005531941s for pod "coredns-66bc5c9577-h8n5p" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:39:02.926948  478234 pod_ready.go:83] waiting for pod "etcd-no-preload-643397" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:39:02.931787  478234 pod_ready.go:94] pod "etcd-no-preload-643397" is "Ready"
	I1003 19:39:02.931857  478234 pod_ready.go:86] duration metric: took 4.881969ms for pod "etcd-no-preload-643397" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:39:02.934529  478234 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-643397" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:39:02.939172  478234 pod_ready.go:94] pod "kube-apiserver-no-preload-643397" is "Ready"
	I1003 19:39:02.939200  478234 pod_ready.go:86] duration metric: took 4.645937ms for pod "kube-apiserver-no-preload-643397" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:39:02.941614  478234 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-643397" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:39:03.122962  478234 pod_ready.go:94] pod "kube-controller-manager-no-preload-643397" is "Ready"
	I1003 19:39:03.123038  478234 pod_ready.go:86] duration metric: took 181.400022ms for pod "kube-controller-manager-no-preload-643397" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:39:03.323073  478234 pod_ready.go:83] waiting for pod "kube-proxy-lcs2q" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:39:03.723280  478234 pod_ready.go:94] pod "kube-proxy-lcs2q" is "Ready"
	I1003 19:39:03.723310  478234 pod_ready.go:86] duration metric: took 400.211074ms for pod "kube-proxy-lcs2q" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:39:03.922422  478234 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-643397" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:39:04.322850  478234 pod_ready.go:94] pod "kube-scheduler-no-preload-643397" is "Ready"
	I1003 19:39:04.322877  478234 pod_ready.go:86] duration metric: took 400.428154ms for pod "kube-scheduler-no-preload-643397" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:39:04.322890  478234 pod_ready.go:40] duration metric: took 41.408970041s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1003 19:39:04.389109  478234 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1003 19:39:04.392322  478234 out.go:179] * Done! kubectl is now configured to use "no-preload-643397" cluster and "default" namespace by default
	W1003 19:39:03.509050  477208 node_ready.go:57] node "embed-certs-327416" has "Ready":"False" status (will retry)
	W1003 19:39:05.509685  477208 node_ready.go:57] node "embed-certs-327416" has "Ready":"False" status (will retry)
	W1003 19:39:08.012286  477208 node_ready.go:57] node "embed-certs-327416" has "Ready":"False" status (will retry)
	W1003 19:39:10.014610  477208 node_ready.go:57] node "embed-certs-327416" has "Ready":"False" status (will retry)
	W1003 19:39:12.510806  477208 node_ready.go:57] node "embed-certs-327416" has "Ready":"False" status (will retry)
	W1003 19:39:15.012834  477208 node_ready.go:57] node "embed-certs-327416" has "Ready":"False" status (will retry)
	W1003 19:39:17.509950  477208 node_ready.go:57] node "embed-certs-327416" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Oct 03 19:39:01 no-preload-643397 crio[654]: time="2025-10-03T19:39:01.555465742Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 03 19:39:01 no-preload-643397 crio[654]: time="2025-10-03T19:39:01.558649602Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 03 19:39:01 no-preload-643397 crio[654]: time="2025-10-03T19:39:01.558685533Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 03 19:39:01 no-preload-643397 crio[654]: time="2025-10-03T19:39:01.558703281Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 03 19:39:01 no-preload-643397 crio[654]: time="2025-10-03T19:39:01.561772505Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 03 19:39:01 no-preload-643397 crio[654]: time="2025-10-03T19:39:01.561806745Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 03 19:39:01 no-preload-643397 crio[654]: time="2025-10-03T19:39:01.561829252Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 03 19:39:01 no-preload-643397 crio[654]: time="2025-10-03T19:39:01.564956972Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 03 19:39:01 no-preload-643397 crio[654]: time="2025-10-03T19:39:01.564993493Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 03 19:39:01 no-preload-643397 crio[654]: time="2025-10-03T19:39:01.565060858Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 03 19:39:01 no-preload-643397 crio[654]: time="2025-10-03T19:39:01.568236053Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 03 19:39:01 no-preload-643397 crio[654]: time="2025-10-03T19:39:01.568270285Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 03 19:39:11 no-preload-643397 crio[654]: time="2025-10-03T19:39:11.265757322Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=b03489d1-b1b3-48f6-b731-61e1642239eb name=/runtime.v1.ImageService/ImageStatus
	Oct 03 19:39:11 no-preload-643397 crio[654]: time="2025-10-03T19:39:11.266716016Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=58ffe6d8-3ff1-49fa-9d02-8fd99fcebc65 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 19:39:11 no-preload-643397 crio[654]: time="2025-10-03T19:39:11.267708942Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8dq9s/dashboard-metrics-scraper" id=8ad9ccc0-d75f-407f-8d30-6f99eb9d7bc0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:39:11 no-preload-643397 crio[654]: time="2025-10-03T19:39:11.267991449Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 19:39:11 no-preload-643397 crio[654]: time="2025-10-03T19:39:11.274935825Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 19:39:11 no-preload-643397 crio[654]: time="2025-10-03T19:39:11.275919479Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 19:39:11 no-preload-643397 crio[654]: time="2025-10-03T19:39:11.290779069Z" level=info msg="Created container 9e1e9b4fe19a20d0e1d02f1ab66d7f7479fb8f666b2994af5f888db15ff382d4: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8dq9s/dashboard-metrics-scraper" id=8ad9ccc0-d75f-407f-8d30-6f99eb9d7bc0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:39:11 no-preload-643397 crio[654]: time="2025-10-03T19:39:11.29184462Z" level=info msg="Starting container: 9e1e9b4fe19a20d0e1d02f1ab66d7f7479fb8f666b2994af5f888db15ff382d4" id=03c10f19-b1e4-476a-81e1-4bb955c63bf5 name=/runtime.v1.RuntimeService/StartContainer
	Oct 03 19:39:11 no-preload-643397 crio[654]: time="2025-10-03T19:39:11.293559952Z" level=info msg="Started container" PID=1712 containerID=9e1e9b4fe19a20d0e1d02f1ab66d7f7479fb8f666b2994af5f888db15ff382d4 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8dq9s/dashboard-metrics-scraper id=03c10f19-b1e4-476a-81e1-4bb955c63bf5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=fa2c3bf1de5856f8a0ae1764925cf9d85321ea8f1d07f19d8180930c2110e67e
	Oct 03 19:39:11 no-preload-643397 conmon[1710]: conmon 9e1e9b4fe19a20d0e1d0 <ninfo>: container 1712 exited with status 1
	Oct 03 19:39:11 no-preload-643397 crio[654]: time="2025-10-03T19:39:11.610153929Z" level=info msg="Removing container: aa979906c9238234a589dc7f071f0a32b32a63d0ca00c51054df57d182702aa3" id=99a2d696-b0bb-482f-ab5e-87eb9df0436c name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 03 19:39:11 no-preload-643397 crio[654]: time="2025-10-03T19:39:11.617433005Z" level=info msg="Error loading conmon cgroup of container aa979906c9238234a589dc7f071f0a32b32a63d0ca00c51054df57d182702aa3: cgroup deleted" id=99a2d696-b0bb-482f-ab5e-87eb9df0436c name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 03 19:39:11 no-preload-643397 crio[654]: time="2025-10-03T19:39:11.621080281Z" level=info msg="Removed container aa979906c9238234a589dc7f071f0a32b32a63d0ca00c51054df57d182702aa3: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8dq9s/dashboard-metrics-scraper" id=99a2d696-b0bb-482f-ab5e-87eb9df0436c name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	9e1e9b4fe19a2       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           7 seconds ago        Exited              dashboard-metrics-scraper   3                   fa2c3bf1de585       dashboard-metrics-scraper-6ffb444bf9-8dq9s   kubernetes-dashboard
	aa091721e2bf9       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           27 seconds ago       Running             storage-provisioner         2                   8055f22ba63b1       storage-provisioner                          kube-system
	8ed7a25aeb889       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   39 seconds ago       Running             kubernetes-dashboard        0                   eb363cbf331a8       kubernetes-dashboard-855c9754f9-8x6xp        kubernetes-dashboard
	655ef1811e74e       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           57 seconds ago       Running             busybox                     1                   4d3225b78f7c8       busybox                                      default
	08858262c4153       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           58 seconds ago       Running             coredns                     1                   0b079101aaf55       coredns-66bc5c9577-h8n5p                     kube-system
	9a21627a747b3       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           58 seconds ago       Running             kindnet-cni                 1                   38fec71ee5a7c       kindnet-7zwct                                kube-system
	536d418166ee5       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           58 seconds ago       Exited              storage-provisioner         1                   8055f22ba63b1       storage-provisioner                          kube-system
	3758592f491ab       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           58 seconds ago       Running             kube-proxy                  1                   c91d5a3b983bd       kube-proxy-lcs2q                             kube-system
	b652fe32e2a41       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   69691b1f1c219       etcd-no-preload-643397                       kube-system
	812c215ff1311       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   d4318b9958916       kube-controller-manager-no-preload-643397    kube-system
	50b207c92dde7       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   d8a36802f5f7a       kube-apiserver-no-preload-643397             kube-system
	c2a31dbd1b598       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   31dff4458a38a       kube-scheduler-no-preload-643397             kube-system
	
	
	==> coredns [08858262c415390ebd844284cd70070377a032c8c9eb33572a8ede338609d2c5] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35992 - 33621 "HINFO IN 4915121020754239743.973228478016810188. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.025106233s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               no-preload-643397
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-643397
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a43873c79fc22f8b1ccd29d3dfa635d392b09335
	                    minikube.k8s.io/name=no-preload-643397
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_03T19_37_14_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 03 Oct 2025 19:37:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-643397
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 03 Oct 2025 19:39:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 03 Oct 2025 19:38:40 +0000   Fri, 03 Oct 2025 19:37:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 03 Oct 2025 19:38:40 +0000   Fri, 03 Oct 2025 19:37:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 03 Oct 2025 19:38:40 +0000   Fri, 03 Oct 2025 19:37:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 03 Oct 2025 19:38:40 +0000   Fri, 03 Oct 2025 19:37:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-643397
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 1d54560dca6f48f99b1f04666fc49819
	  System UUID:                acffaaf4-a938-4dce-9b53-3c0346f455b4
	  Boot ID:                    3762136e-8bec-4104-a5cb-0b1976f6048e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 coredns-66bc5c9577-h8n5p                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m
	  kube-system                 etcd-no-preload-643397                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m5s
	  kube-system                 kindnet-7zwct                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m
	  kube-system                 kube-apiserver-no-preload-643397              250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 kube-controller-manager-no-preload-643397     200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m7s
	  kube-system                 kube-proxy-lcs2q                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-scheduler-no-preload-643397              100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-8dq9s    0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-8x6xp         0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 118s               kube-proxy       
	  Normal   Starting                 56s                kube-proxy       
	  Normal   Starting                 2m6s               kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m6s               kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     2m5s               kubelet          Node no-preload-643397 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m5s               kubelet          Node no-preload-643397 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  2m5s               kubelet          Node no-preload-643397 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           2m1s               node-controller  Node no-preload-643397 event: Registered Node no-preload-643397 in Controller
	  Normal   NodeReady                106s               kubelet          Node no-preload-643397 status is now: NodeReady
	  Normal   Starting                 69s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 69s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  69s (x8 over 69s)  kubelet          Node no-preload-643397 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    69s (x8 over 69s)  kubelet          Node no-preload-643397 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     69s (x8 over 69s)  kubelet          Node no-preload-643397 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           56s                node-controller  Node no-preload-643397 event: Registered Node no-preload-643397 in Controller
	
	
	==> dmesg <==
	[Oct 3 19:09] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:10] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:11] overlayfs: idmapped layers are currently not supported
	[  +4.287643] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:12] overlayfs: idmapped layers are currently not supported
	[ +24.839009] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:13] overlayfs: idmapped layers are currently not supported
	[ +26.493253] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:15] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:16] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:17] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000010] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Oct 3 19:18] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:20] overlayfs: idmapped layers are currently not supported
	[ +32.018892] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:22] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:24] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:26] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:32] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:34] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:35] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:36] overlayfs: idmapped layers are currently not supported
	[  +4.740983] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:38] overlayfs: idmapped layers are currently not supported
	[ +12.897300] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [b652fe32e2a41b7f6685f05ea15d89051280d1a714c5ade044ee7267681f63c0] <==
	{"level":"warn","ts":"2025-10-03T19:38:16.979307Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:38:17.015988Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:38:17.038894Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:38:17.079467Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:38:17.103269Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:38:17.141518Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:38:17.195293Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:38:17.232756Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:38:17.270944Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:38:17.321464Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:38:17.358723Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:38:17.391269Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:38:17.419006Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:38:17.457074Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:38:17.480053Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:38:17.511853Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:38:17.534699Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:38:17.601549Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:38:17.637937Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:38:17.664039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:38:17.682687Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:38:17.728062Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:38:17.765235Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:38:17.789239Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:38:17.845301Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38204","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 19:39:19 up  2:21,  0 user,  load average: 3.45, 2.85, 2.20
	Linux no-preload-643397 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9a21627a747b30eb7424912a81297de7e4b519fb2f1252d457725408bd116383] <==
	I1003 19:38:21.238237       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1003 19:38:21.242368       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1003 19:38:21.242507       1 main.go:148] setting mtu 1500 for CNI 
	I1003 19:38:21.242519       1 main.go:178] kindnetd IP family: "ipv4"
	I1003 19:38:21.242533       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-03T19:38:21Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1003 19:38:21.543381       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1003 19:38:21.543408       1 controller.go:381] "Waiting for informer caches to sync"
	I1003 19:38:21.543417       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1003 19:38:21.543704       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1003 19:38:51.543987       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1003 19:38:51.544209       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1003 19:38:51.544296       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1003 19:38:51.544432       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1003 19:38:52.844531       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1003 19:38:52.844563       1 metrics.go:72] Registering metrics
	I1003 19:38:52.844635       1 controller.go:711] "Syncing nftables rules"
	I1003 19:39:01.546223       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1003 19:39:01.546279       1 main.go:301] handling current node
	I1003 19:39:11.550819       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1003 19:39:11.550857       1 main.go:301] handling current node
	
	
	==> kube-apiserver [50b207c92dde75b009a0a2439f4af8008c52855e0ddbc54dcf57ab3bd1972302] <==
	I1003 19:38:19.660067       1 aggregator.go:171] initial CRD sync complete...
	I1003 19:38:19.660092       1 autoregister_controller.go:144] Starting autoregister controller
	I1003 19:38:19.660100       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1003 19:38:19.660107       1 cache.go:39] Caches are synced for autoregister controller
	I1003 19:38:19.012812       1 repairip.go:210] Starting ipallocator-repair-controller
	I1003 19:38:19.660250       1 shared_informer.go:349] "Waiting for caches to sync" controller="ipallocator-repair-controller"
	I1003 19:38:19.660257       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1003 19:38:19.660353       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1003 19:38:19.013032       1 default_servicecidr_controller.go:111] Starting kubernetes-service-cidr-controller
	I1003 19:38:19.661106       1 shared_informer.go:349] "Waiting for caches to sync" controller="kubernetes-service-cidr-controller"
	I1003 19:38:19.698352       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1003 19:38:19.766533       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1003 19:38:19.776207       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1003 19:38:19.776274       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1003 19:38:20.026968       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1003 19:38:20.185192       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1003 19:38:21.840961       1 controller.go:667] quota admission added evaluator for: namespaces
	I1003 19:38:22.028068       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1003 19:38:22.138259       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1003 19:38:22.190156       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1003 19:38:22.379583       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.165.55"}
	I1003 19:38:22.477591       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.221.55"}
	I1003 19:38:23.895251       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1003 19:38:24.297919       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1003 19:38:24.345019       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [812c215ff131175f339b6cce18e2749be199f4a5f61868272c2e91503fb4ccb8] <==
	I1003 19:38:23.892781       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1003 19:38:23.898125       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1003 19:38:23.899047       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1003 19:38:23.899062       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1003 19:38:23.904179       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1003 19:38:23.908478       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1003 19:38:23.910386       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1003 19:38:23.913598       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1003 19:38:23.918823       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1003 19:38:23.919251       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1003 19:38:23.925493       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1003 19:38:23.927836       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1003 19:38:23.934373       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1003 19:38:23.934513       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1003 19:38:23.934634       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-643397"
	I1003 19:38:23.934703       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1003 19:38:23.937832       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1003 19:38:23.937908       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1003 19:38:23.943604       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1003 19:38:23.943666       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1003 19:38:23.943694       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1003 19:38:23.948777       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1003 19:38:23.951096       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1003 19:38:23.953801       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1003 19:38:23.956579       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	
	
	==> kube-proxy [3758592f491ab78c49e621316a06fabe1198eeb6f1be7d8ed8d05bc65d190237] <==
	I1003 19:38:21.790176       1 server_linux.go:53] "Using iptables proxy"
	I1003 19:38:22.093931       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1003 19:38:22.294957       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1003 19:38:22.295286       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1003 19:38:22.295373       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1003 19:38:22.776851       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1003 19:38:22.776922       1 server_linux.go:132] "Using iptables Proxier"
	I1003 19:38:22.846454       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1003 19:38:22.846758       1 server.go:527] "Version info" version="v1.34.1"
	I1003 19:38:22.846774       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1003 19:38:22.855312       1 config.go:106] "Starting endpoint slice config controller"
	I1003 19:38:22.855335       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1003 19:38:22.855640       1 config.go:200] "Starting service config controller"
	I1003 19:38:22.855659       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1003 19:38:22.866500       1 config.go:403] "Starting serviceCIDR config controller"
	I1003 19:38:22.866629       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1003 19:38:22.877432       1 config.go:309] "Starting node config controller"
	I1003 19:38:22.877527       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1003 19:38:22.877561       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1003 19:38:22.955542       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1003 19:38:22.956795       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1003 19:38:22.966712       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [c2a31dbd1b598431e3e46d051690749feb66f319d34b0915aae14a51b8c1b0e2] <==
	I1003 19:38:15.267084       1 serving.go:386] Generated self-signed cert in-memory
	I1003 19:38:20.923240       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1003 19:38:20.923265       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1003 19:38:20.975315       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1003 19:38:20.975414       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1003 19:38:20.975431       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1003 19:38:20.975483       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1003 19:38:21.014081       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1003 19:38:21.014113       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1003 19:38:21.014133       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1003 19:38:21.014140       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1003 19:38:21.138465       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1003 19:38:21.138916       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1003 19:38:21.178020       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Oct 03 19:38:33 no-preload-643397 kubelet[774]: I1003 19:38:33.478417     774 scope.go:117] "RemoveContainer" containerID="a0a594c0ba53d77dd610f887674b8330cdd03b9f36fb8bc5d80d050bc9a9c948"
	Oct 03 19:38:34 no-preload-643397 kubelet[774]: I1003 19:38:34.483412     774 scope.go:117] "RemoveContainer" containerID="a0a594c0ba53d77dd610f887674b8330cdd03b9f36fb8bc5d80d050bc9a9c948"
	Oct 03 19:38:34 no-preload-643397 kubelet[774]: I1003 19:38:34.483807     774 scope.go:117] "RemoveContainer" containerID="8cb2a1d4a7332c64f343d4090306f882560b05ae38075f8fbf622b19b615d75c"
	Oct 03 19:38:34 no-preload-643397 kubelet[774]: E1003 19:38:34.483983     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8dq9s_kubernetes-dashboard(339a73b0-9164-4e99-bfc4-ba69ac8b1fc8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8dq9s" podUID="339a73b0-9164-4e99-bfc4-ba69ac8b1fc8"
	Oct 03 19:38:35 no-preload-643397 kubelet[774]: I1003 19:38:35.507407     774 scope.go:117] "RemoveContainer" containerID="8cb2a1d4a7332c64f343d4090306f882560b05ae38075f8fbf622b19b615d75c"
	Oct 03 19:38:35 no-preload-643397 kubelet[774]: E1003 19:38:35.507577     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8dq9s_kubernetes-dashboard(339a73b0-9164-4e99-bfc4-ba69ac8b1fc8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8dq9s" podUID="339a73b0-9164-4e99-bfc4-ba69ac8b1fc8"
	Oct 03 19:38:36 no-preload-643397 kubelet[774]: I1003 19:38:36.509936     774 scope.go:117] "RemoveContainer" containerID="8cb2a1d4a7332c64f343d4090306f882560b05ae38075f8fbf622b19b615d75c"
	Oct 03 19:38:36 no-preload-643397 kubelet[774]: E1003 19:38:36.510100     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8dq9s_kubernetes-dashboard(339a73b0-9164-4e99-bfc4-ba69ac8b1fc8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8dq9s" podUID="339a73b0-9164-4e99-bfc4-ba69ac8b1fc8"
	Oct 03 19:38:49 no-preload-643397 kubelet[774]: I1003 19:38:49.266022     774 scope.go:117] "RemoveContainer" containerID="8cb2a1d4a7332c64f343d4090306f882560b05ae38075f8fbf622b19b615d75c"
	Oct 03 19:38:49 no-preload-643397 kubelet[774]: I1003 19:38:49.550977     774 scope.go:117] "RemoveContainer" containerID="8cb2a1d4a7332c64f343d4090306f882560b05ae38075f8fbf622b19b615d75c"
	Oct 03 19:38:50 no-preload-643397 kubelet[774]: I1003 19:38:50.554735     774 scope.go:117] "RemoveContainer" containerID="aa979906c9238234a589dc7f071f0a32b32a63d0ca00c51054df57d182702aa3"
	Oct 03 19:38:50 no-preload-643397 kubelet[774]: E1003 19:38:50.554887     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8dq9s_kubernetes-dashboard(339a73b0-9164-4e99-bfc4-ba69ac8b1fc8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8dq9s" podUID="339a73b0-9164-4e99-bfc4-ba69ac8b1fc8"
	Oct 03 19:38:50 no-preload-643397 kubelet[774]: I1003 19:38:50.569055     774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-8x6xp" podStartSLOduration=13.639206441 podStartE2EDuration="26.569038167s" podCreationTimestamp="2025-10-03 19:38:24 +0000 UTC" firstStartedPulling="2025-10-03 19:38:26.661400191 +0000 UTC m=+16.743900990" lastFinishedPulling="2025-10-03 19:38:39.591231917 +0000 UTC m=+29.673732716" observedRunningTime="2025-10-03 19:38:40.544994073 +0000 UTC m=+30.627494880" watchObservedRunningTime="2025-10-03 19:38:50.569038167 +0000 UTC m=+40.651538966"
	Oct 03 19:38:51 no-preload-643397 kubelet[774]: I1003 19:38:51.558502     774 scope.go:117] "RemoveContainer" containerID="536d418166ee54c56a8550cc5c3e8e5c8328113ba2d06a9231fa1c71db5c6035"
	Oct 03 19:38:56 no-preload-643397 kubelet[774]: I1003 19:38:56.341570     774 scope.go:117] "RemoveContainer" containerID="aa979906c9238234a589dc7f071f0a32b32a63d0ca00c51054df57d182702aa3"
	Oct 03 19:38:56 no-preload-643397 kubelet[774]: E1003 19:38:56.341758     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8dq9s_kubernetes-dashboard(339a73b0-9164-4e99-bfc4-ba69ac8b1fc8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8dq9s" podUID="339a73b0-9164-4e99-bfc4-ba69ac8b1fc8"
	Oct 03 19:39:11 no-preload-643397 kubelet[774]: I1003 19:39:11.265250     774 scope.go:117] "RemoveContainer" containerID="aa979906c9238234a589dc7f071f0a32b32a63d0ca00c51054df57d182702aa3"
	Oct 03 19:39:11 no-preload-643397 kubelet[774]: I1003 19:39:11.608971     774 scope.go:117] "RemoveContainer" containerID="aa979906c9238234a589dc7f071f0a32b32a63d0ca00c51054df57d182702aa3"
	Oct 03 19:39:12 no-preload-643397 kubelet[774]: I1003 19:39:12.613115     774 scope.go:117] "RemoveContainer" containerID="9e1e9b4fe19a20d0e1d02f1ab66d7f7479fb8f666b2994af5f888db15ff382d4"
	Oct 03 19:39:12 no-preload-643397 kubelet[774]: E1003 19:39:12.613279     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8dq9s_kubernetes-dashboard(339a73b0-9164-4e99-bfc4-ba69ac8b1fc8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8dq9s" podUID="339a73b0-9164-4e99-bfc4-ba69ac8b1fc8"
	Oct 03 19:39:16 no-preload-643397 kubelet[774]: I1003 19:39:16.341798     774 scope.go:117] "RemoveContainer" containerID="9e1e9b4fe19a20d0e1d02f1ab66d7f7479fb8f666b2994af5f888db15ff382d4"
	Oct 03 19:39:16 no-preload-643397 kubelet[774]: E1003 19:39:16.341972     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8dq9s_kubernetes-dashboard(339a73b0-9164-4e99-bfc4-ba69ac8b1fc8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8dq9s" podUID="339a73b0-9164-4e99-bfc4-ba69ac8b1fc8"
	Oct 03 19:39:16 no-preload-643397 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 03 19:39:16 no-preload-643397 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 03 19:39:16 no-preload-643397 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [8ed7a25aeb889c9f8a8428310aeb66737ce47377bcda2f1f2e1c8885151af962] <==
	2025/10/03 19:38:39 Starting overwatch
	2025/10/03 19:38:39 Using namespace: kubernetes-dashboard
	2025/10/03 19:38:39 Using in-cluster config to connect to apiserver
	2025/10/03 19:38:39 Using secret token for csrf signing
	2025/10/03 19:38:39 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/03 19:38:39 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/03 19:38:39 Successful initial request to the apiserver, version: v1.34.1
	2025/10/03 19:38:39 Generating JWE encryption key
	2025/10/03 19:38:39 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/03 19:38:39 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/03 19:38:40 Initializing JWE encryption key from synchronized object
	2025/10/03 19:38:40 Creating in-cluster Sidecar client
	2025/10/03 19:38:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/03 19:38:40 Serving insecurely on HTTP port: 9090
	2025/10/03 19:39:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [536d418166ee54c56a8550cc5c3e8e5c8328113ba2d06a9231fa1c71db5c6035] <==
	I1003 19:38:21.534546       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1003 19:38:51.536247       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [aa091721e2bf929a06f8f2a0382b1ac27830c5ef2bedaeb775f4567f2a80447c] <==
	I1003 19:38:51.631564       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1003 19:38:51.631699       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1003 19:38:51.635076       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:38:55.091358       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:38:59.351956       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:39:02.950266       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:39:06.003744       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:39:09.026088       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:39:09.031789       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1003 19:39:09.032078       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1003 19:39:09.032263       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-643397_52ea4327-a05b-4739-9d00-90b553f05ca0!
	I1003 19:39:09.033262       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0d558076-5928-4d46-b528-95f96636eae1", APIVersion:"v1", ResourceVersion:"642", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-643397_52ea4327-a05b-4739-9d00-90b553f05ca0 became leader
	W1003 19:39:09.040445       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:39:09.045385       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1003 19:39:09.132640       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-643397_52ea4327-a05b-4739-9d00-90b553f05ca0!
	W1003 19:39:11.048803       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:39:11.053749       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:39:13.057433       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:39:13.064683       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:39:15.067715       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:39:15.073744       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:39:17.078041       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:39:17.084116       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:39:19.086916       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:39:19.092192       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
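The repeated "v1 Endpoints is deprecated in v1.33+" warnings in the storage-provisioner section above come from its leader election, which (per the 19:39:09 event) still takes its lock on the kube-system/k8s.io-minikube-hostpath Endpoints object. A quick manual way to inspect that lock, assuming the kubeconfig context created by this profile:

    kubectl --context no-preload-643397 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
    # the current holder is typically recorded in a leader annotation on this object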
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-643397 -n no-preload-643397
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-643397 -n no-preload-643397: exit status 2 (462.773713ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
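The harness treats exit status 2 from `status` as potentially benign: the command encodes component state in its exit code even while printing "Running" for the queried field. A minimal manual re-check, using only the profile and node name from this run (the echo line is illustrative, not part of the harness):

    out/minikube-linux-arm64 status --format='{{.APIServer}}' -p no-preload-643397 -n no-preload-643397
    echo "status exit code: $?"   # non-zero here does not necessarily mean the apiserver is down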
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-643397 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
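The field selector above surfaces any pod whose phase is not Running; its output is not captured in this report. Follow-up queries that could be run by hand against the same context (illustrative only):

    kubectl --context no-preload-643397 get pods -A -o wide --field-selector=status.phase!=Running
    kubectl --context no-preload-643397 get events -A --sort-by=.lastTimestamp | tail -n 20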
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
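All three proxy variables are empty, so a proxy is not a factor in this failure. Were they set, the node IP and service CIDR used by this profile (192.168.76.2 and 10.96.0.0/12, both visible later in this report) would normally need to be excluded, e.g.:

    export NO_PROXY=192.168.76.2,10.96.0.0/12   # hypothetical: only relevant when HTTP(S)_PROXY is actually set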
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-643397
helpers_test.go:243: (dbg) docker inspect no-preload-643397:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2ff626657df750cf9a1329bdf9d0fad13d27c9b5d259ea3feeee2866dd91e501",
	        "Created": "2025-10-03T19:36:25.722491125Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 478544,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-03T19:38:01.341398026Z",
	            "FinishedAt": "2025-10-03T19:38:00.366153143Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/2ff626657df750cf9a1329bdf9d0fad13d27c9b5d259ea3feeee2866dd91e501/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2ff626657df750cf9a1329bdf9d0fad13d27c9b5d259ea3feeee2866dd91e501/hostname",
	        "HostsPath": "/var/lib/docker/containers/2ff626657df750cf9a1329bdf9d0fad13d27c9b5d259ea3feeee2866dd91e501/hosts",
	        "LogPath": "/var/lib/docker/containers/2ff626657df750cf9a1329bdf9d0fad13d27c9b5d259ea3feeee2866dd91e501/2ff626657df750cf9a1329bdf9d0fad13d27c9b5d259ea3feeee2866dd91e501-json.log",
	        "Name": "/no-preload-643397",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-643397:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-643397",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2ff626657df750cf9a1329bdf9d0fad13d27c9b5d259ea3feeee2866dd91e501",
	                "LowerDir": "/var/lib/docker/overlay2/75229aada1a7c5cdb860071c36cb7ed171994b4cb8c1ec0abce827b45a7f840c-init/diff:/var/lib/docker/overlay2/87b205803817b0b71a214d995ab7e10a92033bbf72d76d6e052f1d21ccecb313/diff",
	                "MergedDir": "/var/lib/docker/overlay2/75229aada1a7c5cdb860071c36cb7ed171994b4cb8c1ec0abce827b45a7f840c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/75229aada1a7c5cdb860071c36cb7ed171994b4cb8c1ec0abce827b45a7f840c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/75229aada1a7c5cdb860071c36cb7ed171994b4cb8c1ec0abce827b45a7f840c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-643397",
	                "Source": "/var/lib/docker/volumes/no-preload-643397/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-643397",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-643397",
	                "name.minikube.sigs.k8s.io": "no-preload-643397",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c3258dbab0862e75fede7d1477febb5b523c6d2e4293667abc9a871b84cc4470",
	            "SandboxKey": "/var/run/docker/netns/c3258dbab086",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33438"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33439"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33442"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33440"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33441"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-643397": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3a:3f:19:06:81:d6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f8dcbeddfcb1aa31ce25637ca1a7b831d4c9bab55d750a9a6b43e000061a3784",
	                    "EndpointID": "b5b7be564bb38f7cbbb6c10acb413cea9545fae3c40093044c46007b0a138ce8",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-643397",
	                        "2ff626657df7"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
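Most of the inspect JSON above is not relevant to the pause failure; the fields of interest here (container state and the published apiserver port) can be pulled directly. A hand-run equivalent, assuming the container name used by this profile:

    docker inspect -f '{{.State.Status}} paused={{.State.Paused}} pid={{.State.Pid}}' no-preload-643397
    docker port no-preload-643397 8443/tcp   # host port for the apiserver endpoint (33441 in this run)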
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-643397 -n no-preload-643397
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-643397 -n no-preload-643397: exit status 2 (390.656832ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-643397 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-643397 logs -n 25: (1.336187287s)
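`logs -n 25` limits each component to its last 25 lines, which is why several sections in these dumps begin mid-stream. For more context the same command can be rerun with a larger -n, or written to a file (the --file flag is assumed to be available in this minikube build):

    out/minikube-linux-arm64 -p no-preload-643397 logs -n 200 --file /tmp/no-preload-643397.log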
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────────
───┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────────
───┤
	│ start   │ -p cert-expiration-324520 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-324520   │ jenkins │ v1.37.0 │ 03 Oct 25 19:32 UTC │ 03 Oct 25 19:33 UTC │
	│ delete  │ -p force-systemd-env-159095                                                                                                                                                                                                                   │ force-systemd-env-159095 │ jenkins │ v1.37.0 │ 03 Oct 25 19:34 UTC │ 03 Oct 25 19:34 UTC │
	│ start   │ -p cert-options-305866 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-305866      │ jenkins │ v1.37.0 │ 03 Oct 25 19:34 UTC │ 03 Oct 25 19:34 UTC │
	│ ssh     │ cert-options-305866 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-305866      │ jenkins │ v1.37.0 │ 03 Oct 25 19:34 UTC │ 03 Oct 25 19:34 UTC │
	│ ssh     │ -p cert-options-305866 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-305866      │ jenkins │ v1.37.0 │ 03 Oct 25 19:34 UTC │ 03 Oct 25 19:34 UTC │
	│ delete  │ -p cert-options-305866                                                                                                                                                                                                                        │ cert-options-305866      │ jenkins │ v1.37.0 │ 03 Oct 25 19:34 UTC │ 03 Oct 25 19:35 UTC │
	│ start   │ -p old-k8s-version-174543 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-174543   │ jenkins │ v1.37.0 │ 03 Oct 25 19:35 UTC │ 03 Oct 25 19:36 UTC │
	│ start   │ -p cert-expiration-324520 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-324520   │ jenkins │ v1.37.0 │ 03 Oct 25 19:36 UTC │ 03 Oct 25 19:36 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-174543 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-174543   │ jenkins │ v1.37.0 │ 03 Oct 25 19:36 UTC │                     │
	│ stop    │ -p old-k8s-version-174543 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-174543   │ jenkins │ v1.37.0 │ 03 Oct 25 19:36 UTC │ 03 Oct 25 19:36 UTC │
	│ delete  │ -p cert-expiration-324520                                                                                                                                                                                                                     │ cert-expiration-324520   │ jenkins │ v1.37.0 │ 03 Oct 25 19:36 UTC │ 03 Oct 25 19:36 UTC │
	│ start   │ -p no-preload-643397 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-643397        │ jenkins │ v1.37.0 │ 03 Oct 25 19:36 UTC │ 03 Oct 25 19:37 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-174543 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-174543   │ jenkins │ v1.37.0 │ 03 Oct 25 19:36 UTC │ 03 Oct 25 19:36 UTC │
	│ start   │ -p old-k8s-version-174543 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-174543   │ jenkins │ v1.37.0 │ 03 Oct 25 19:36 UTC │ 03 Oct 25 19:37 UTC │
	│ image   │ old-k8s-version-174543 image list --format=json                                                                                                                                                                                               │ old-k8s-version-174543   │ jenkins │ v1.37.0 │ 03 Oct 25 19:37 UTC │ 03 Oct 25 19:37 UTC │
	│ pause   │ -p old-k8s-version-174543 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-174543   │ jenkins │ v1.37.0 │ 03 Oct 25 19:37 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-643397 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-643397        │ jenkins │ v1.37.0 │ 03 Oct 25 19:37 UTC │                     │
	│ stop    │ -p no-preload-643397 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-643397        │ jenkins │ v1.37.0 │ 03 Oct 25 19:37 UTC │ 03 Oct 25 19:38 UTC │
	│ delete  │ -p old-k8s-version-174543                                                                                                                                                                                                                     │ old-k8s-version-174543   │ jenkins │ v1.37.0 │ 03 Oct 25 19:37 UTC │ 03 Oct 25 19:37 UTC │
	│ delete  │ -p old-k8s-version-174543                                                                                                                                                                                                                     │ old-k8s-version-174543   │ jenkins │ v1.37.0 │ 03 Oct 25 19:37 UTC │ 03 Oct 25 19:37 UTC │
	│ start   │ -p embed-certs-327416 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-327416       │ jenkins │ v1.37.0 │ 03 Oct 25 19:37 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-643397 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-643397        │ jenkins │ v1.37.0 │ 03 Oct 25 19:38 UTC │ 03 Oct 25 19:38 UTC │
	│ start   │ -p no-preload-643397 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-643397        │ jenkins │ v1.37.0 │ 03 Oct 25 19:38 UTC │ 03 Oct 25 19:39 UTC │
	│ image   │ no-preload-643397 image list --format=json                                                                                                                                                                                                    │ no-preload-643397        │ jenkins │ v1.37.0 │ 03 Oct 25 19:39 UTC │ 03 Oct 25 19:39 UTC │
	│ pause   │ -p no-preload-643397 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-643397        │ jenkins │ v1.37.0 │ 03 Oct 25 19:39 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────────
───┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/03 19:38:00
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 19:38:00.977951  478234 out.go:360] Setting OutFile to fd 1 ...
	I1003 19:38:00.978182  478234 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 19:38:00.978206  478234 out.go:374] Setting ErrFile to fd 2...
	I1003 19:38:00.978227  478234 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 19:38:00.978509  478234 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-284583/.minikube/bin
	I1003 19:38:00.978893  478234 out.go:368] Setting JSON to false
	I1003 19:38:00.979795  478234 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8432,"bootTime":1759511849,"procs":166,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1003 19:38:00.979893  478234 start.go:140] virtualization:  
	I1003 19:38:00.984093  478234 out.go:179] * [no-preload-643397] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1003 19:38:00.988236  478234 out.go:179]   - MINIKUBE_LOCATION=21625
	I1003 19:38:00.988308  478234 notify.go:220] Checking for updates...
	I1003 19:38:00.996960  478234 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 19:38:01.001082  478234 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21625-284583/kubeconfig
	I1003 19:38:01.004999  478234 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-284583/.minikube
	I1003 19:38:01.009272  478234 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1003 19:38:01.011489  478234 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 19:38:01.014997  478234 config.go:182] Loaded profile config "no-preload-643397": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 19:38:01.015564  478234 driver.go:421] Setting default libvirt URI to qemu:///system
	I1003 19:38:01.050661  478234 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1003 19:38:01.050815  478234 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 19:38:01.145976  478234 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-03 19:38:01.134806253 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1003 19:38:01.146091  478234 docker.go:318] overlay module found
	I1003 19:38:01.149984  478234 out.go:179] * Using the docker driver based on existing profile
	I1003 19:38:01.152101  478234 start.go:304] selected driver: docker
	I1003 19:38:01.152118  478234 start.go:924] validating driver "docker" against &{Name:no-preload-643397 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-643397 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9
p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 19:38:01.152228  478234 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 19:38:01.153245  478234 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 19:38:01.239818  478234 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-03 19:38:01.229023714 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1003 19:38:01.240177  478234 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 19:38:01.240200  478234 cni.go:84] Creating CNI manager for ""
	I1003 19:38:01.240263  478234 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 19:38:01.240297  478234 start.go:348] cluster config:
	{Name:no-preload-643397 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-643397 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 19:38:01.243727  478234 out.go:179] * Starting "no-preload-643397" primary control-plane node in "no-preload-643397" cluster
	I1003 19:38:01.245969  478234 cache.go:123] Beginning downloading kic base image for docker with crio
	I1003 19:38:01.249090  478234 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1003 19:38:01.252854  478234 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 19:38:01.252944  478234 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1003 19:38:01.253020  478234 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/config.json ...
	I1003 19:38:01.253423  478234 cache.go:107] acquiring lock: {Name:mk7cc8e90392b121da3fc2fa2839cd90be030987 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 19:38:01.253520  478234 cache.go:115] /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1003 19:38:01.253535  478234 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 115.94µs
	I1003 19:38:01.253553  478234 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1003 19:38:01.253570  478234 cache.go:107] acquiring lock: {Name:mk629d4402b8cf97e7e7b39bf007d7d385cd74c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 19:38:01.253607  478234 cache.go:115] /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1003 19:38:01.253618  478234 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 50.06µs
	I1003 19:38:01.253624  478234 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1003 19:38:01.253633  478234 cache.go:107] acquiring lock: {Name:mkd2a56be71d53969ad5666736c12fa03b4cc23b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 19:38:01.253666  478234 cache.go:115] /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1003 19:38:01.253676  478234 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 42.946µs
	I1003 19:38:01.253682  478234 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1003 19:38:01.253692  478234 cache.go:107] acquiring lock: {Name:mk92106990cd186a73d6cc849d81383dcc3cef29 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 19:38:01.253723  478234 cache.go:115] /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1003 19:38:01.253735  478234 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 43.643µs
	I1003 19:38:01.253741  478234 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1003 19:38:01.253750  478234 cache.go:107] acquiring lock: {Name:mkaa4b85211ddf86dbb4a58ea6b27051e9e3e961 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 19:38:01.253776  478234 cache.go:115] /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1003 19:38:01.253787  478234 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 37.129µs
	I1003 19:38:01.253793  478234 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1003 19:38:01.253802  478234 cache.go:107] acquiring lock: {Name:mkb05875322f2d80de3e0a433e30c3b3e43961f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 19:38:01.253842  478234 cache.go:115] /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1003 19:38:01.253851  478234 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 50.569µs
	I1003 19:38:01.253862  478234 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1003 19:38:01.253875  478234 cache.go:107] acquiring lock: {Name:mkf5fb1b6792a0e71c262e68ff69fb567f93ebde Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 19:38:01.253902  478234 cache.go:115] /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1003 19:38:01.253912  478234 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 37.867µs
	I1003 19:38:01.253918  478234 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1003 19:38:01.253285  478234 cache.go:107] acquiring lock: {Name:mk83e5b24e5c429aa699dd46e8de74a53fff017f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 19:38:01.253950  478234 cache.go:115] /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1003 19:38:01.253959  478234 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 687.263µs
	I1003 19:38:01.253965  478234 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1003 19:38:01.253971  478234 cache.go:87] Successfully saved all images to host disk.
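All eight cached images were already on disk, so each cache check above completed in microseconds. As a hedged aside (the path is taken from the log lines above; the command is not part of the test run), the per-image tarballs can be listed directly from the cache directory:

	ls -R /home/jenkins/minikube-integration/21625-284583/.minikube/cache/images/arm64/
	# expected: the registry.k8s.io/* images and gcr.io/k8s-minikube/storage-provisioner_v5 shown above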
	I1003 19:38:01.281304  478234 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1003 19:38:01.281325  478234 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1003 19:38:01.281339  478234 cache.go:232] Successfully downloaded all kic artifacts
	I1003 19:38:01.281362  478234 start.go:360] acquireMachinesLock for no-preload-643397: {Name:mkd464eef28f143df6be9e03c4b51988b6ba8cf8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 19:38:01.281414  478234 start.go:364] duration metric: took 35.799µs to acquireMachinesLock for "no-preload-643397"
	I1003 19:38:01.281434  478234 start.go:96] Skipping create...Using existing machine configuration
	I1003 19:38:01.281439  478234 fix.go:54] fixHost starting: 
	I1003 19:38:01.281704  478234 cli_runner.go:164] Run: docker container inspect no-preload-643397 --format={{.State.Status}}
	I1003 19:38:01.301841  478234 fix.go:112] recreateIfNeeded on no-preload-643397: state=Stopped err=<nil>
	W1003 19:38:01.301870  478234 fix.go:138] unexpected machine state, will restart: <nil>
	I1003 19:37:58.343732  477208 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21625-284583/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-327416:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.468846269s)
	I1003 19:37:58.343781  477208 kic.go:203] duration metric: took 4.469021484s to extract preloaded images to volume ...
	W1003 19:37:58.343932  477208 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1003 19:37:58.344051  477208 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1003 19:37:58.397758  477208 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-327416 --name embed-certs-327416 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-327416 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-327416 --network embed-certs-327416 --ip 192.168.85.2 --volume embed-certs-327416:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1003 19:37:58.716391  477208 cli_runner.go:164] Run: docker container inspect embed-certs-327416 --format={{.State.Running}}
	I1003 19:37:58.738386  477208 cli_runner.go:164] Run: docker container inspect embed-certs-327416 --format={{.State.Status}}
	I1003 19:37:58.762069  477208 cli_runner.go:164] Run: docker exec embed-certs-327416 stat /var/lib/dpkg/alternatives/iptables
	I1003 19:37:58.811282  477208 oci.go:144] the created container "embed-certs-327416" has a running status.
	I1003 19:37:58.811313  477208 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21625-284583/.minikube/machines/embed-certs-327416/id_rsa...
	I1003 19:37:59.394289  477208 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21625-284583/.minikube/machines/embed-certs-327416/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1003 19:37:59.412251  477208 cli_runner.go:164] Run: docker container inspect embed-certs-327416 --format={{.State.Status}}
	I1003 19:37:59.428450  477208 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1003 19:37:59.428468  477208 kic_runner.go:114] Args: [docker exec --privileged embed-certs-327416 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1003 19:37:59.469854  477208 cli_runner.go:164] Run: docker container inspect embed-certs-327416 --format={{.State.Status}}
	I1003 19:37:59.487325  477208 machine.go:93] provisionDockerMachine start ...
	I1003 19:37:59.487421  477208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-327416
	I1003 19:37:59.504290  477208 main.go:141] libmachine: Using SSH client type: native
	I1003 19:37:59.504617  477208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1003 19:37:59.504634  477208 main.go:141] libmachine: About to run SSH command:
	hostname
	I1003 19:37:59.505271  477208 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58274->127.0.0.1:33433: read: connection reset by peer
	I1003 19:38:02.636645  477208 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-327416
	
	I1003 19:38:02.636668  477208 ubuntu.go:182] provisioning hostname "embed-certs-327416"
	I1003 19:38:02.636791  477208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-327416
	I1003 19:38:02.655162  477208 main.go:141] libmachine: Using SSH client type: native
	I1003 19:38:02.655472  477208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1003 19:38:02.655484  477208 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-327416 && echo "embed-certs-327416" | sudo tee /etc/hostname
	I1003 19:38:02.802630  477208 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-327416
	
	I1003 19:38:02.802771  477208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-327416
	I1003 19:38:02.820286  477208 main.go:141] libmachine: Using SSH client type: native
	I1003 19:38:02.820619  477208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1003 19:38:02.820636  477208 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-327416' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-327416/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-327416' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 19:38:02.953441  477208 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 19:38:02.953472  477208 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21625-284583/.minikube CaCertPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21625-284583/.minikube}
	I1003 19:38:02.953495  477208 ubuntu.go:190] setting up certificates
	I1003 19:38:02.953504  477208 provision.go:84] configureAuth start
	I1003 19:38:02.953562  477208 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-327416
	I1003 19:38:01.306879  478234 out.go:252] * Restarting existing docker container for "no-preload-643397" ...
	I1003 19:38:01.306996  478234 cli_runner.go:164] Run: docker start no-preload-643397
	I1003 19:38:01.577099  478234 cli_runner.go:164] Run: docker container inspect no-preload-643397 --format={{.State.Status}}
	I1003 19:38:01.600399  478234 kic.go:430] container "no-preload-643397" state is running.
	I1003 19:38:01.600869  478234 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-643397
	I1003 19:38:01.623384  478234 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/config.json ...
	I1003 19:38:01.623615  478234 machine.go:93] provisionDockerMachine start ...
	I1003 19:38:01.623674  478234 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-643397
	I1003 19:38:01.644440  478234 main.go:141] libmachine: Using SSH client type: native
	I1003 19:38:01.644946  478234 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1003 19:38:01.644962  478234 main.go:141] libmachine: About to run SSH command:
	hostname
	I1003 19:38:01.645500  478234 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50230->127.0.0.1:33438: read: connection reset by peer
	I1003 19:38:04.790450  478234 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-643397
	
	I1003 19:38:04.790483  478234 ubuntu.go:182] provisioning hostname "no-preload-643397"
	I1003 19:38:04.790591  478234 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-643397
	I1003 19:38:04.819065  478234 main.go:141] libmachine: Using SSH client type: native
	I1003 19:38:04.819379  478234 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1003 19:38:04.819391  478234 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-643397 && echo "no-preload-643397" | sudo tee /etc/hostname
	I1003 19:38:04.982306  478234 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-643397
	
	I1003 19:38:04.982489  478234 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-643397
	I1003 19:38:05.020034  478234 main.go:141] libmachine: Using SSH client type: native
	I1003 19:38:05.020343  478234 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1003 19:38:05.020360  478234 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-643397' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-643397/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-643397' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 19:38:05.169484  478234 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 19:38:05.169514  478234 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21625-284583/.minikube CaCertPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21625-284583/.minikube}
	I1003 19:38:05.169539  478234 ubuntu.go:190] setting up certificates
	I1003 19:38:05.169549  478234 provision.go:84] configureAuth start
	I1003 19:38:05.169613  478234 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-643397
	I1003 19:38:05.193735  478234 provision.go:143] copyHostCerts
	I1003 19:38:05.193803  478234 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-284583/.minikube/ca.pem, removing ...
	I1003 19:38:05.193823  478234 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-284583/.minikube/ca.pem
	I1003 19:38:05.193898  478234 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21625-284583/.minikube/ca.pem (1082 bytes)
	I1003 19:38:05.194000  478234 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-284583/.minikube/cert.pem, removing ...
	I1003 19:38:05.194011  478234 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-284583/.minikube/cert.pem
	I1003 19:38:05.194039  478234 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21625-284583/.minikube/cert.pem (1123 bytes)
	I1003 19:38:05.194095  478234 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-284583/.minikube/key.pem, removing ...
	I1003 19:38:05.194105  478234 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-284583/.minikube/key.pem
	I1003 19:38:05.194130  478234 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21625-284583/.minikube/key.pem (1675 bytes)
	I1003 19:38:05.194183  478234 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca-key.pem org=jenkins.no-preload-643397 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-643397]
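For reference, the SANs baked into the regenerated server certificate can be inspected with openssl; this is a hedged illustration (the openssl invocation is an assumption, the file path and SAN list come from the log line above), not a step the test performs:

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'
	# expected to list: 127.0.0.1, 192.168.76.2, localhost, minikube, no-preload-643397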
	I1003 19:38:02.983464  477208 provision.go:143] copyHostCerts
	I1003 19:38:02.983525  477208 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-284583/.minikube/ca.pem, removing ...
	I1003 19:38:02.983543  477208 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-284583/.minikube/ca.pem
	I1003 19:38:02.983615  477208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21625-284583/.minikube/ca.pem (1082 bytes)
	I1003 19:38:02.983703  477208 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-284583/.minikube/cert.pem, removing ...
	I1003 19:38:02.983715  477208 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-284583/.minikube/cert.pem
	I1003 19:38:02.983742  477208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21625-284583/.minikube/cert.pem (1123 bytes)
	I1003 19:38:02.983807  477208 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-284583/.minikube/key.pem, removing ...
	I1003 19:38:02.983817  477208 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-284583/.minikube/key.pem
	I1003 19:38:02.983841  477208 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21625-284583/.minikube/key.pem (1675 bytes)
	I1003 19:38:02.983896  477208 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca-key.pem org=jenkins.embed-certs-327416 san=[127.0.0.1 192.168.85.2 embed-certs-327416 localhost minikube]
	I1003 19:38:04.602458  477208 provision.go:177] copyRemoteCerts
	I1003 19:38:04.602531  477208 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 19:38:04.602598  477208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-327416
	I1003 19:38:04.619970  477208 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/embed-certs-327416/id_rsa Username:docker}
	I1003 19:38:04.717872  477208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 19:38:04.744810  477208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1003 19:38:04.763167  477208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1003 19:38:04.781917  477208 provision.go:87] duration metric: took 1.828388937s to configureAuth
	I1003 19:38:04.781946  477208 ubuntu.go:206] setting minikube options for container-runtime
	I1003 19:38:04.782186  477208 config.go:182] Loaded profile config "embed-certs-327416": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 19:38:04.782330  477208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-327416
	I1003 19:38:04.804184  477208 main.go:141] libmachine: Using SSH client type: native
	I1003 19:38:04.804499  477208 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1003 19:38:04.804514  477208 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1003 19:38:05.199104  477208 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1003 19:38:05.199130  477208 machine.go:96] duration metric: took 5.711781672s to provisionDockerMachine
	I1003 19:38:05.199141  477208 client.go:171] duration metric: took 12.008385661s to LocalClient.Create
	I1003 19:38:05.199155  477208 start.go:167] duration metric: took 12.008453452s to libmachine.API.Create "embed-certs-327416"
	I1003 19:38:05.199163  477208 start.go:293] postStartSetup for "embed-certs-327416" (driver="docker")
	I1003 19:38:05.199173  477208 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 19:38:05.199242  477208 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 19:38:05.199295  477208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-327416
	I1003 19:38:05.223831  477208 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/embed-certs-327416/id_rsa Username:docker}
	I1003 19:38:05.323026  477208 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 19:38:05.326838  477208 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1003 19:38:05.326867  477208 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1003 19:38:05.326879  477208 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-284583/.minikube/addons for local assets ...
	I1003 19:38:05.326935  477208 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-284583/.minikube/files for local assets ...
	I1003 19:38:05.327023  477208 filesync.go:149] local asset: /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem -> 2864342.pem in /etc/ssl/certs
	I1003 19:38:05.327134  477208 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 19:38:05.339068  477208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem --> /etc/ssl/certs/2864342.pem (1708 bytes)
	I1003 19:38:05.359609  477208 start.go:296] duration metric: took 160.431486ms for postStartSetup
	I1003 19:38:05.360036  477208 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-327416
	I1003 19:38:05.394480  477208 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/embed-certs-327416/config.json ...
	I1003 19:38:05.394774  477208 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 19:38:05.394828  477208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-327416
	I1003 19:38:05.423150  477208 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/embed-certs-327416/id_rsa Username:docker}
	I1003 19:38:05.518231  477208 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1003 19:38:05.523626  477208 start.go:128] duration metric: took 12.336512765s to createHost
	I1003 19:38:05.523653  477208 start.go:83] releasing machines lock for "embed-certs-327416", held for 12.336644238s
	I1003 19:38:05.523729  477208 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-327416
	I1003 19:38:05.548335  477208 ssh_runner.go:195] Run: cat /version.json
	I1003 19:38:05.548397  477208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-327416
	I1003 19:38:05.548648  477208 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 19:38:05.548708  477208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-327416
	I1003 19:38:05.573848  477208 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/embed-certs-327416/id_rsa Username:docker}
	I1003 19:38:05.583398  477208 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/embed-certs-327416/id_rsa Username:docker}
	I1003 19:38:05.788861  477208 ssh_runner.go:195] Run: systemctl --version
	I1003 19:38:05.796527  477208 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1003 19:38:05.847978  477208 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1003 19:38:05.852759  477208 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 19:38:05.852835  477208 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 19:38:05.886640  477208 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1003 19:38:05.886665  477208 start.go:495] detecting cgroup driver to use...
	I1003 19:38:05.886698  477208 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1003 19:38:05.886752  477208 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 19:38:05.908312  477208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 19:38:05.923274  477208 docker.go:218] disabling cri-docker service (if available) ...
	I1003 19:38:05.923343  477208 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1003 19:38:05.940676  477208 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1003 19:38:05.960884  477208 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1003 19:38:06.110119  477208 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1003 19:38:06.271901  477208 docker.go:234] disabling docker service ...
	I1003 19:38:06.271973  477208 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1003 19:38:06.310157  477208 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1003 19:38:06.325131  477208 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1003 19:38:06.468782  477208 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1003 19:38:06.619698  477208 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1003 19:38:06.639658  477208 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 19:38:06.654816  477208 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1003 19:38:06.654894  477208 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:38:06.664368  477208 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1003 19:38:06.664451  477208 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:38:06.673782  477208 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:38:06.682741  477208 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:38:06.692096  477208 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 19:38:06.700667  477208 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:38:06.709560  477208 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:38:06.724108  477208 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:38:06.733573  477208 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 19:38:06.742476  477208 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 19:38:06.750697  477208 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 19:38:06.892627  477208 ssh_runner.go:195] Run: sudo systemctl restart crio
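Taken together, the sed pipeline above leaves CRI-O's drop-in config with the pause image, cgroup manager, conmon cgroup and unprivileged-port sysctl that the rest of the start flow expects. A hedged way to confirm the result on the node (not a command the test runs; the file path and expected values are the ones from the commands above):

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.10.1"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",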
	I1003 19:38:07.049117  477208 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1003 19:38:07.049188  477208 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1003 19:38:07.055839  477208 start.go:563] Will wait 60s for crictl version
	I1003 19:38:07.055906  477208 ssh_runner.go:195] Run: which crictl
	I1003 19:38:07.059358  477208 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1003 19:38:07.087959  477208 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1003 19:38:07.088042  477208 ssh_runner.go:195] Run: crio --version
	I1003 19:38:07.122269  477208 ssh_runner.go:195] Run: crio --version
	I1003 19:38:07.163347  477208 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1003 19:38:07.166346  477208 cli_runner.go:164] Run: docker network inspect embed-certs-327416 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 19:38:07.192250  477208 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1003 19:38:07.196293  477208 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 19:38:07.209532  477208 kubeadm.go:883] updating cluster {Name:embed-certs-327416 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-327416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1003 19:38:07.209643  477208 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 19:38:07.209706  477208 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 19:38:07.251767  477208 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 19:38:07.251797  477208 crio.go:433] Images already preloaded, skipping extraction
	I1003 19:38:07.251852  477208 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 19:38:07.288163  477208 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 19:38:07.288187  477208 cache_images.go:85] Images are preloaded, skipping loading
	I1003 19:38:07.288196  477208 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1003 19:38:07.288300  477208 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-327416 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-327416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
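The kubelet unit shown above is the content later copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and /lib/systemd/system/kubelet.service (see the scp lines further down). Once it is in place, the effective unit plus drop-in can be viewed on the node with systemd itself; a hedged aside, not a test step:

	systemctl cat kubelet
	# prints /lib/systemd/system/kubelet.service followed by the 10-kubeadm.conf drop-in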
	I1003 19:38:07.288381  477208 ssh_runner.go:195] Run: crio config
	I1003 19:38:07.380945  477208 cni.go:84] Creating CNI manager for ""
	I1003 19:38:07.380969  477208 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 19:38:07.381013  477208 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1003 19:38:07.381042  477208 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-327416 NodeName:embed-certs-327416 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 19:38:07.381223  477208 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-327416"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
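The rendered kubeadm config above is written to /var/tmp/minikube/kubeadm.yaml.new (see the scp line below) before being handed to kubeadm later in the start flow. As a hedged illustration of how such a file can be sanity-checked on the node without touching the cluster (not something this test does; the binary path and file name come from the log):

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml.new --dry-run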
	I1003 19:38:07.381325  477208 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1003 19:38:07.389811  477208 binaries.go:44] Found k8s binaries, skipping transfer
	I1003 19:38:07.389891  477208 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1003 19:38:07.410258  477208 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1003 19:38:07.434409  477208 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 19:38:07.449633  477208 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1003 19:38:07.468204  477208 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1003 19:38:07.472607  477208 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 19:38:07.482380  477208 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 19:38:07.638161  477208 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 19:38:07.655466  477208 certs.go:69] Setting up /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/embed-certs-327416 for IP: 192.168.85.2
	I1003 19:38:07.655485  477208 certs.go:195] generating shared ca certs ...
	I1003 19:38:07.655501  477208 certs.go:227] acquiring lock for ca certs: {Name:mk5a10e6c921326e9c211447576eaeb893259ba7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:38:07.655634  477208 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21625-284583/.minikube/ca.key
	I1003 19:38:07.655671  477208 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.key
	I1003 19:38:07.655678  477208 certs.go:257] generating profile certs ...
	I1003 19:38:07.655731  477208 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/embed-certs-327416/client.key
	I1003 19:38:07.655744  477208 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/embed-certs-327416/client.crt with IP's: []
	I1003 19:38:06.808389  478234 provision.go:177] copyRemoteCerts
	I1003 19:38:06.808503  478234 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 19:38:06.808589  478234 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-643397
	I1003 19:38:06.846599  478234 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/no-preload-643397/id_rsa Username:docker}
	I1003 19:38:06.945720  478234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 19:38:06.965381  478234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1003 19:38:06.985698  478234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1003 19:38:07.008639  478234 provision.go:87] duration metric: took 1.83907464s to configureAuth
	I1003 19:38:07.008718  478234 ubuntu.go:206] setting minikube options for container-runtime
	I1003 19:38:07.008988  478234 config.go:182] Loaded profile config "no-preload-643397": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 19:38:07.009166  478234 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-643397
	I1003 19:38:07.029147  478234 main.go:141] libmachine: Using SSH client type: native
	I1003 19:38:07.029463  478234 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1003 19:38:07.029484  478234 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1003 19:38:07.397011  478234 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1003 19:38:07.397035  478234 machine.go:96] duration metric: took 5.773410963s to provisionDockerMachine
	I1003 19:38:07.397046  478234 start.go:293] postStartSetup for "no-preload-643397" (driver="docker")
	I1003 19:38:07.397056  478234 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 19:38:07.397125  478234 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 19:38:07.397177  478234 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-643397
	I1003 19:38:07.423895  478234 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/no-preload-643397/id_rsa Username:docker}
	I1003 19:38:07.530288  478234 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 19:38:07.533974  478234 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1003 19:38:07.534003  478234 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1003 19:38:07.534014  478234 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-284583/.minikube/addons for local assets ...
	I1003 19:38:07.534074  478234 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-284583/.minikube/files for local assets ...
	I1003 19:38:07.534160  478234 filesync.go:149] local asset: /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem -> 2864342.pem in /etc/ssl/certs
	I1003 19:38:07.534275  478234 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 19:38:07.545922  478234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem --> /etc/ssl/certs/2864342.pem (1708 bytes)
	I1003 19:38:07.570413  478234 start.go:296] duration metric: took 173.352271ms for postStartSetup
	I1003 19:38:07.570493  478234 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 19:38:07.570538  478234 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-643397
	I1003 19:38:07.592176  478234 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/no-preload-643397/id_rsa Username:docker}
	I1003 19:38:07.690430  478234 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1003 19:38:07.695506  478234 fix.go:56] duration metric: took 6.414060463s for fixHost
	I1003 19:38:07.695530  478234 start.go:83] releasing machines lock for "no-preload-643397", held for 6.41410766s
	I1003 19:38:07.695595  478234 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-643397
	I1003 19:38:07.740549  478234 ssh_runner.go:195] Run: cat /version.json
	I1003 19:38:07.740606  478234 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 19:38:07.740615  478234 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-643397
	I1003 19:38:07.740657  478234 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-643397
	I1003 19:38:07.769128  478234 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/no-preload-643397/id_rsa Username:docker}
	I1003 19:38:07.773388  478234 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/no-preload-643397/id_rsa Username:docker}
	I1003 19:38:07.964434  478234 ssh_runner.go:195] Run: systemctl --version
	I1003 19:38:07.970907  478234 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1003 19:38:08.047920  478234 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1003 19:38:08.052456  478234 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 19:38:08.052532  478234 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 19:38:08.063232  478234 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1003 19:38:08.063250  478234 start.go:495] detecting cgroup driver to use...
	I1003 19:38:08.063281  478234 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1003 19:38:08.063330  478234 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 19:38:08.080591  478234 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 19:38:08.101806  478234 docker.go:218] disabling cri-docker service (if available) ...
	I1003 19:38:08.101866  478234 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1003 19:38:08.118335  478234 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1003 19:38:08.132940  478234 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1003 19:38:08.278117  478234 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1003 19:38:08.427254  478234 docker.go:234] disabling docker service ...
	I1003 19:38:08.427318  478234 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1003 19:38:08.443765  478234 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1003 19:38:08.458720  478234 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1003 19:38:08.621252  478234 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1003 19:38:08.791515  478234 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1003 19:38:08.805504  478234 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 19:38:08.819922  478234 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1003 19:38:08.819999  478234 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:38:08.828890  478234 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1003 19:38:08.829017  478234 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:38:08.839433  478234 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:38:08.847837  478234 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:38:08.857547  478234 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 19:38:08.866517  478234 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:38:08.875614  478234 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:38:08.884413  478234 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:38:08.893731  478234 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 19:38:08.902311  478234 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 19:38:08.910409  478234 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 19:38:09.066289  478234 ssh_runner.go:195] Run: sudo systemctl restart crio
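
Note on the block of sed invocations above: minikube rewrites /etc/crio/crio.conf.d/02-crio.conf in place to pin the pause image and switch the cgroup manager before restarting cri-o. A minimal Go sketch of the same pause-image edit (illustrative only, not minikube's own code; the file path and image tag are copied from the log lines above):

// pause_image.go - illustrative sketch of the `sed -i 's|^.*pause_image = ...|'`
// step from the log: rewrite the pause_image line in cri-o's drop-in config.
package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf" // path taken from the log
	data, err := os.ReadFile(conf)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Replace any existing pause_image setting, same as the sed expression above.
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	out := re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	if err := os.WriteFile(conf, out, 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}

The cgroup_manager and default_sysctls edits in the log follow the same rewrite-and-restart pattern, which is why a `systemctl restart crio` closes the sequence.
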
	I1003 19:38:09.247110  478234 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1003 19:38:09.247180  478234 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1003 19:38:09.259748  478234 start.go:563] Will wait 60s for crictl version
	I1003 19:38:09.259824  478234 ssh_runner.go:195] Run: which crictl
	I1003 19:38:09.263822  478234 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1003 19:38:09.331819  478234 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1003 19:38:09.331909  478234 ssh_runner.go:195] Run: crio --version
	I1003 19:38:09.409159  478234 ssh_runner.go:195] Run: crio --version
	I1003 19:38:09.466939  478234 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1003 19:38:09.469793  478234 cli_runner.go:164] Run: docker network inspect no-preload-643397 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 19:38:09.494455  478234 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1003 19:38:09.498827  478234 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
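
The /etc/hosts rewrite above drops any stale host.minikube.internal entry and appends one pointing at the network gateway. A rough Go equivalent (an assumption for illustration, not minikube source; the IP is copied from the log line):

// hosts_update.go - sketch of the grep -v / append dance used for /etc/hosts.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const hostsFile = "/etc/hosts"
	const entry = "192.168.76.1\thost.minikube.internal" // IP taken from the log

	data, err := os.ReadFile(hostsFile)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue // same filter as the grep -v in the log
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry, "")
	if err := os.WriteFile(hostsFile, []byte(strings.Join(kept, "\n")), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
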
	I1003 19:38:09.511059  478234 kubeadm.go:883] updating cluster {Name:no-preload-643397 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-643397 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1003 19:38:09.511171  478234 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 19:38:09.511234  478234 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 19:38:09.563532  478234 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 19:38:09.563558  478234 cache_images.go:85] Images are preloaded, skipping loading
	I1003 19:38:09.563566  478234 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1003 19:38:09.563670  478234 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-643397 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-643397 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1003 19:38:09.563764  478234 ssh_runner.go:195] Run: crio config
	I1003 19:38:09.658953  478234 cni.go:84] Creating CNI manager for ""
	I1003 19:38:09.658978  478234 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 19:38:09.658993  478234 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1003 19:38:09.659016  478234 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-643397 NodeName:no-preload-643397 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 19:38:09.659143  478234 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-643397"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
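The generated kubeadm.yaml above carries four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A quick way to sanity-check that all four were rendered is to walk the multi-document file and print each kind; the sketch below assumes the third-party gopkg.in/yaml.v3 package and is not part of minikube:

// kubeadm_kinds.go - list the kind/apiVersion of every document in the
// generated kubeadm.yaml (path taken from the log).
package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
	}
}
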
	I1003 19:38:09.659218  478234 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1003 19:38:09.675280  478234 binaries.go:44] Found k8s binaries, skipping transfer
	I1003 19:38:09.675351  478234 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1003 19:38:09.683810  478234 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1003 19:38:09.704660  478234 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 19:38:09.720023  478234 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1003 19:38:09.736587  478234 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1003 19:38:09.740497  478234 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 19:38:09.750216  478234 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 19:38:09.894857  478234 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 19:38:09.915168  478234 certs.go:69] Setting up /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397 for IP: 192.168.76.2
	I1003 19:38:09.915189  478234 certs.go:195] generating shared ca certs ...
	I1003 19:38:09.915205  478234 certs.go:227] acquiring lock for ca certs: {Name:mk5a10e6c921326e9c211447576eaeb893259ba7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:38:09.915341  478234 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21625-284583/.minikube/ca.key
	I1003 19:38:09.915393  478234 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.key
	I1003 19:38:09.915405  478234 certs.go:257] generating profile certs ...
	I1003 19:38:09.915491  478234 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/client.key
	I1003 19:38:09.915550  478234 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/apiserver.key.ee2e84a9
	I1003 19:38:09.915599  478234 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/proxy-client.key
	I1003 19:38:09.915716  478234 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/286434.pem (1338 bytes)
	W1003 19:38:09.915751  478234 certs.go:480] ignoring /home/jenkins/minikube-integration/21625-284583/.minikube/certs/286434_empty.pem, impossibly tiny 0 bytes
	I1003 19:38:09.915763  478234 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 19:38:09.915801  478234 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem (1082 bytes)
	I1003 19:38:09.915829  478234 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem (1123 bytes)
	I1003 19:38:09.915854  478234 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/key.pem (1675 bytes)
	I1003 19:38:09.915898  478234 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem (1708 bytes)
	I1003 19:38:09.916552  478234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 19:38:09.943838  478234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1003 19:38:09.992759  478234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 19:38:10.030566  478234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1003 19:38:10.115863  478234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1003 19:38:10.186533  478234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1003 19:38:10.274648  478234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 19:38:10.294439  478234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1003 19:38:10.314600  478234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 19:38:10.334604  478234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/certs/286434.pem --> /usr/share/ca-certificates/286434.pem (1338 bytes)
	I1003 19:38:10.354398  478234 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem --> /usr/share/ca-certificates/2864342.pem (1708 bytes)
	I1003 19:38:10.383009  478234 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1003 19:38:10.397488  478234 ssh_runner.go:195] Run: openssl version
	I1003 19:38:10.403981  478234 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2864342.pem && ln -fs /usr/share/ca-certificates/2864342.pem /etc/ssl/certs/2864342.pem"
	I1003 19:38:10.413737  478234 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2864342.pem
	I1003 19:38:10.418313  478234 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  3 18:34 /usr/share/ca-certificates/2864342.pem
	I1003 19:38:10.418439  478234 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2864342.pem
	I1003 19:38:10.462044  478234 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2864342.pem /etc/ssl/certs/3ec20f2e.0"
	I1003 19:38:10.470805  478234 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 19:38:10.479786  478234 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 19:38:10.484026  478234 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  3 18:27 /usr/share/ca-certificates/minikubeCA.pem
	I1003 19:38:10.484142  478234 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 19:38:10.527277  478234 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1003 19:38:10.536242  478234 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/286434.pem && ln -fs /usr/share/ca-certificates/286434.pem /etc/ssl/certs/286434.pem"
	I1003 19:38:10.547112  478234 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/286434.pem
	I1003 19:38:10.551749  478234 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  3 18:34 /usr/share/ca-certificates/286434.pem
	I1003 19:38:10.551874  478234 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/286434.pem
	I1003 19:38:10.596183  478234 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/286434.pem /etc/ssl/certs/51391683.0"
	I1003 19:38:10.605163  478234 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 19:38:10.609910  478234 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1003 19:38:10.653288  478234 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1003 19:38:10.727526  478234 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1003 19:38:10.827622  478234 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1003 19:38:10.916253  478234 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1003 19:38:11.002404  478234 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
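
The series of `openssl x509 ... -checkend 86400` runs above verifies that none of the control-plane certificates expires within the next 24 hours. The same check expressed with Go's standard crypto/x509 package (a hedged sketch, not the code minikube actually runs) would be:

// certcheck.go - fail if the given certificate expires within 24 hours,
// mirroring `openssl x509 -noout -checkend 86400`. Path taken from the log.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate expires within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate valid for at least 24h")
}
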
	I1003 19:38:11.091051  478234 kubeadm.go:400] StartCluster: {Name:no-preload-643397 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-643397 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bi
naryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 19:38:11.091205  478234 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1003 19:38:11.091314  478234 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1003 19:38:11.331777  478234 cri.go:89] found id: "b652fe32e2a41b7f6685f05ea15d89051280d1a714c5ade044ee7267681f63c0"
	I1003 19:38:11.331811  478234 cri.go:89] found id: "812c215ff131175f339b6cce18e2749be199f4a5f61868272c2e91503fb4ccb8"
	I1003 19:38:11.331817  478234 cri.go:89] found id: "50b207c92dde75b009a0a2439f4af8008c52855e0ddbc54dcf57ab3bd1972302"
	I1003 19:38:11.331821  478234 cri.go:89] found id: "c2a31dbd1b598431e3e46d051690749feb66f319d34b0915aae14a51b8c1b0e2"
	I1003 19:38:11.331824  478234 cri.go:89] found id: ""
	I1003 19:38:11.331874  478234 ssh_runner.go:195] Run: sudo runc list -f json
	W1003 19:38:11.378031  478234 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-03T19:38:11Z" level=error msg="open /run/runc: no such file or directory"
	I1003 19:38:11.378180  478234 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 19:38:11.407248  478234 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1003 19:38:11.407313  478234 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1003 19:38:11.407395  478234 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1003 19:38:11.422166  478234 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1003 19:38:11.422665  478234 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-643397" does not appear in /home/jenkins/minikube-integration/21625-284583/kubeconfig
	I1003 19:38:11.422834  478234 kubeconfig.go:62] /home/jenkins/minikube-integration/21625-284583/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-643397" cluster setting kubeconfig missing "no-preload-643397" context setting]
	I1003 19:38:11.423198  478234 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/kubeconfig: {Name:mkc1323fd87f4a78231a26d2dab0dff7feecf1e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:38:11.424773  478234 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1003 19:38:11.457327  478234 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1003 19:38:11.457362  478234 kubeadm.go:601] duration metric: took 50.030971ms to restartPrimaryControlPlane
	I1003 19:38:11.457371  478234 kubeadm.go:402] duration metric: took 366.341282ms to StartCluster
	I1003 19:38:11.457387  478234 settings.go:142] acquiring lock: {Name:mkc95577dbc448e3409dfa2b5e53a3a1327cb451 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:38:11.457452  478234 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21625-284583/kubeconfig
	I1003 19:38:11.458029  478234 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/kubeconfig: {Name:mkc1323fd87f4a78231a26d2dab0dff7feecf1e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:38:11.458229  478234 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1003 19:38:11.458565  478234 config.go:182] Loaded profile config "no-preload-643397": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 19:38:11.458626  478234 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1003 19:38:11.458696  478234 addons.go:69] Setting storage-provisioner=true in profile "no-preload-643397"
	I1003 19:38:11.458715  478234 addons.go:238] Setting addon storage-provisioner=true in "no-preload-643397"
	W1003 19:38:11.458722  478234 addons.go:247] addon storage-provisioner should already be in state true
	I1003 19:38:11.458748  478234 host.go:66] Checking if "no-preload-643397" exists ...
	I1003 19:38:11.459365  478234 cli_runner.go:164] Run: docker container inspect no-preload-643397 --format={{.State.Status}}
	I1003 19:38:11.459653  478234 addons.go:69] Setting dashboard=true in profile "no-preload-643397"
	I1003 19:38:11.459674  478234 addons.go:238] Setting addon dashboard=true in "no-preload-643397"
	W1003 19:38:11.459681  478234 addons.go:247] addon dashboard should already be in state true
	I1003 19:38:11.459703  478234 host.go:66] Checking if "no-preload-643397" exists ...
	I1003 19:38:11.460109  478234 cli_runner.go:164] Run: docker container inspect no-preload-643397 --format={{.State.Status}}
	I1003 19:38:11.462467  478234 addons.go:69] Setting default-storageclass=true in profile "no-preload-643397"
	I1003 19:38:11.462491  478234 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-643397"
	I1003 19:38:11.462764  478234 cli_runner.go:164] Run: docker container inspect no-preload-643397 --format={{.State.Status}}
	I1003 19:38:11.464838  478234 out.go:179] * Verifying Kubernetes components...
	I1003 19:38:11.468034  478234 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 19:38:11.531035  478234 addons.go:238] Setting addon default-storageclass=true in "no-preload-643397"
	W1003 19:38:11.531058  478234 addons.go:247] addon default-storageclass should already be in state true
	I1003 19:38:11.531083  478234 host.go:66] Checking if "no-preload-643397" exists ...
	I1003 19:38:11.531493  478234 cli_runner.go:164] Run: docker container inspect no-preload-643397 --format={{.State.Status}}
	I1003 19:38:11.538767  478234 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 19:38:11.538827  478234 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1003 19:38:11.541838  478234 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1003 19:38:08.062224  477208 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/embed-certs-327416/client.crt ...
	I1003 19:38:08.062262  477208 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/embed-certs-327416/client.crt: {Name:mkd12e089d2efdef91909060ee8b687b378a7c79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:38:08.062454  477208 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/embed-certs-327416/client.key ...
	I1003 19:38:08.062470  477208 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/embed-certs-327416/client.key: {Name:mkdf04b1a2c3641454003eae37f6bb4de7cadf06 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:38:08.062568  477208 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/embed-certs-327416/apiserver.key.00090923
	I1003 19:38:08.062588  477208 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/embed-certs-327416/apiserver.crt.00090923 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1003 19:38:09.851041  477208 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/embed-certs-327416/apiserver.crt.00090923 ...
	I1003 19:38:09.851081  477208 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/embed-certs-327416/apiserver.crt.00090923: {Name:mk677df1e84177a76aedc7865cd935dc39fc022a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:38:09.851266  477208 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/embed-certs-327416/apiserver.key.00090923 ...
	I1003 19:38:09.851294  477208 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/embed-certs-327416/apiserver.key.00090923: {Name:mkc0e7f828a59dbd78a39b955a29702e00cca82f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:38:09.851378  477208 certs.go:382] copying /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/embed-certs-327416/apiserver.crt.00090923 -> /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/embed-certs-327416/apiserver.crt
	I1003 19:38:09.851473  477208 certs.go:386] copying /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/embed-certs-327416/apiserver.key.00090923 -> /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/embed-certs-327416/apiserver.key
	I1003 19:38:09.851539  477208 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/embed-certs-327416/proxy-client.key
	I1003 19:38:09.851566  477208 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/embed-certs-327416/proxy-client.crt with IP's: []
	I1003 19:38:11.446997  477208 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/embed-certs-327416/proxy-client.crt ...
	I1003 19:38:11.447034  477208 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/embed-certs-327416/proxy-client.crt: {Name:mkc50501e8a07e47ddb1c2b07b860d6b459421fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:38:11.447213  477208 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/embed-certs-327416/proxy-client.key ...
	I1003 19:38:11.447231  477208 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/embed-certs-327416/proxy-client.key: {Name:mka4b48c0876e5acf71c0acf3306176930b77b49 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:38:11.447411  477208 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/286434.pem (1338 bytes)
	W1003 19:38:11.447459  477208 certs.go:480] ignoring /home/jenkins/minikube-integration/21625-284583/.minikube/certs/286434_empty.pem, impossibly tiny 0 bytes
	I1003 19:38:11.447475  477208 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 19:38:11.447503  477208 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem (1082 bytes)
	I1003 19:38:11.447529  477208 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem (1123 bytes)
	I1003 19:38:11.447558  477208 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/key.pem (1675 bytes)
	I1003 19:38:11.447604  477208 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem (1708 bytes)
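
The certs.go steps above generate CA-signed profile certificates whose SANs include the service IP, localhost, an internal IP, and the node IP (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.85.2). A condensed illustration of that pattern with crypto/x509 follows; it creates a throwaway CA instead of reusing ~/.minikube/ca.key, so it is a sketch of the idea rather than minikube's implementation:

// gencert.go - generate a CA-signed serving cert with IP SANs, in the spirit
// of the "generating signed profile cert" steps above. Error handling is
// elided for brevity; all names here are illustrative.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA for illustration; the real flow loads ~/.minikube/ca.{crt,key}.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Serving cert with the same IP SANs the log reports.
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
		},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
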
	I1003 19:38:11.448244  477208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 19:38:11.507863  477208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1003 19:38:11.568933  477208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 19:38:11.599437  477208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1003 19:38:11.657275  477208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/embed-certs-327416/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1003 19:38:11.678841  477208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/embed-certs-327416/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1003 19:38:11.700063  477208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/embed-certs-327416/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 19:38:11.720665  477208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/embed-certs-327416/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1003 19:38:11.742315  477208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem --> /usr/share/ca-certificates/2864342.pem (1708 bytes)
	I1003 19:38:11.764091  477208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 19:38:11.783530  477208 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/certs/286434.pem --> /usr/share/ca-certificates/286434.pem (1338 bytes)
	I1003 19:38:11.825201  477208 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1003 19:38:11.876264  477208 ssh_runner.go:195] Run: openssl version
	I1003 19:38:11.891256  477208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 19:38:11.909330  477208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 19:38:11.919248  477208 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  3 18:27 /usr/share/ca-certificates/minikubeCA.pem
	I1003 19:38:11.919318  477208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 19:38:11.993610  477208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1003 19:38:12.002023  477208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/286434.pem && ln -fs /usr/share/ca-certificates/286434.pem /etc/ssl/certs/286434.pem"
	I1003 19:38:12.017848  477208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/286434.pem
	I1003 19:38:12.022773  477208 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  3 18:34 /usr/share/ca-certificates/286434.pem
	I1003 19:38:12.022843  477208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/286434.pem
	I1003 19:38:12.085429  477208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/286434.pem /etc/ssl/certs/51391683.0"
	I1003 19:38:12.097972  477208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2864342.pem && ln -fs /usr/share/ca-certificates/2864342.pem /etc/ssl/certs/2864342.pem"
	I1003 19:38:12.107148  477208 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2864342.pem
	I1003 19:38:12.112963  477208 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  3 18:34 /usr/share/ca-certificates/2864342.pem
	I1003 19:38:12.113052  477208 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2864342.pem
	I1003 19:38:12.173609  477208 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2864342.pem /etc/ssl/certs/3ec20f2e.0"
	I1003 19:38:12.182219  477208 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 19:38:12.189608  477208 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1003 19:38:12.189698  477208 kubeadm.go:400] StartCluster: {Name:embed-certs-327416 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-327416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: S
ocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 19:38:12.189802  477208 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1003 19:38:12.189881  477208 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1003 19:38:12.242839  477208 cri.go:89] found id: ""
	I1003 19:38:12.242994  477208 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 19:38:12.253587  477208 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1003 19:38:12.262634  477208 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1003 19:38:12.262743  477208 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1003 19:38:12.275565  477208 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1003 19:38:12.275634  477208 kubeadm.go:157] found existing configuration files:
	
	I1003 19:38:12.275723  477208 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1003 19:38:12.286255  477208 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1003 19:38:12.286316  477208 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1003 19:38:12.293718  477208 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1003 19:38:12.302876  477208 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1003 19:38:12.302936  477208 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1003 19:38:12.315426  477208 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1003 19:38:12.326127  477208 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1003 19:38:12.326188  477208 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1003 19:38:12.335351  477208 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1003 19:38:12.346200  477208 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1003 19:38:12.346319  477208 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1003 19:38:12.354824  477208 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1003 19:38:12.423564  477208 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1003 19:38:12.423969  477208 kubeadm.go:318] [preflight] Running pre-flight checks
	I1003 19:38:12.453207  477208 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1003 19:38:12.453282  477208 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1003 19:38:12.453324  477208 kubeadm.go:318] OS: Linux
	I1003 19:38:12.453376  477208 kubeadm.go:318] CGROUPS_CPU: enabled
	I1003 19:38:12.453429  477208 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1003 19:38:12.453479  477208 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1003 19:38:12.453531  477208 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1003 19:38:12.453581  477208 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1003 19:38:12.453635  477208 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1003 19:38:12.453684  477208 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1003 19:38:12.453735  477208 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1003 19:38:12.453784  477208 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1003 19:38:12.554720  477208 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1003 19:38:12.554838  477208 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1003 19:38:12.554939  477208 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1003 19:38:12.600211  477208 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1003 19:38:12.606664  477208 out.go:252]   - Generating certificates and keys ...
	I1003 19:38:12.606769  477208 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1003 19:38:12.606842  477208 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1003 19:38:11.541941  478234 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 19:38:11.541951  478234 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1003 19:38:11.542009  478234 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-643397
	I1003 19:38:11.544897  478234 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1003 19:38:11.544923  478234 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1003 19:38:11.544995  478234 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-643397
	I1003 19:38:11.584844  478234 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1003 19:38:11.584868  478234 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1003 19:38:11.584935  478234 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-643397
	I1003 19:38:11.618868  478234 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/no-preload-643397/id_rsa Username:docker}
	I1003 19:38:11.629897  478234 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/no-preload-643397/id_rsa Username:docker}
	I1003 19:38:11.644318  478234 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/no-preload-643397/id_rsa Username:docker}
	I1003 19:38:11.953959  478234 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 19:38:11.977870  478234 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 19:38:12.062590  478234 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1003 19:38:12.112299  478234 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1003 19:38:12.112366  478234 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1003 19:38:12.213329  478234 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1003 19:38:12.213352  478234 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1003 19:38:12.250778  478234 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1003 19:38:12.250798  478234 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1003 19:38:12.391240  478234 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1003 19:38:12.391264  478234 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1003 19:38:12.494554  478234 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1003 19:38:12.494581  478234 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1003 19:38:12.594705  478234 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1003 19:38:12.594730  478234 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1003 19:38:12.629759  478234 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1003 19:38:12.629784  478234 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1003 19:38:12.662013  478234 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1003 19:38:12.662038  478234 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1003 19:38:12.701796  478234 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1003 19:38:12.701821  478234 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1003 19:38:12.731331  478234 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1003 19:38:13.169131  477208 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1003 19:38:14.743707  477208 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1003 19:38:16.420742  477208 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1003 19:38:16.956189  477208 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1003 19:38:17.427282  477208 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1003 19:38:17.427696  477208 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [embed-certs-327416 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1003 19:38:17.699099  477208 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1003 19:38:17.699510  477208 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-327416 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1003 19:38:17.793293  477208 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1003 19:38:18.014838  477208 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1003 19:38:19.234626  477208 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1003 19:38:19.237162  477208 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1003 19:38:19.634461  477208 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1003 19:38:20.071979  477208 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1003 19:38:20.416361  477208 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1003 19:38:20.996135  477208 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1003 19:38:22.275457  477208 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1003 19:38:22.277341  477208 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1003 19:38:22.280123  477208 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1003 19:38:22.809805  478234 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.855802048s)
	I1003 19:38:22.809867  478234 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (10.831972537s)
	I1003 19:38:22.809894  478234 node_ready.go:35] waiting up to 6m0s for node "no-preload-643397" to be "Ready" ...
	I1003 19:38:22.810206  478234 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (10.747588705s)
	I1003 19:38:22.810479  478234 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (10.079115654s)
	I1003 19:38:22.813922  478234 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-643397 addons enable metrics-server
	
	I1003 19:38:22.830998  478234 node_ready.go:49] node "no-preload-643397" is "Ready"
	I1003 19:38:22.831030  478234 node_ready.go:38] duration metric: took 21.113942ms for node "no-preload-643397" to be "Ready" ...
	I1003 19:38:22.831045  478234 api_server.go:52] waiting for apiserver process to appear ...
	I1003 19:38:22.831101  478234 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 19:38:22.853745  478234 api_server.go:72] duration metric: took 11.395480975s to wait for apiserver process to appear ...
	I1003 19:38:22.853773  478234 api_server.go:88] waiting for apiserver healthz status ...
	I1003 19:38:22.853795  478234 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1003 19:38:22.858137  478234 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1003 19:38:22.283549  477208 out.go:252]   - Booting up control plane ...
	I1003 19:38:22.283661  477208 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1003 19:38:22.283894  477208 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1003 19:38:22.294095  477208 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1003 19:38:22.312380  477208 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1003 19:38:22.312824  477208 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1003 19:38:22.323831  477208 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1003 19:38:22.324492  477208 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1003 19:38:22.324791  477208 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1003 19:38:22.531087  477208 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1003 19:38:22.531212  477208 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1003 19:38:22.861013  478234 addons.go:514] duration metric: took 11.402374026s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1003 19:38:22.864909  478234 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1003 19:38:22.866247  478234 api_server.go:141] control plane version: v1.34.1
	I1003 19:38:22.866276  478234 api_server.go:131] duration metric: took 12.496164ms to wait for apiserver health ...
	I1003 19:38:22.866286  478234 system_pods.go:43] waiting for kube-system pods to appear ...
	I1003 19:38:22.873325  478234 system_pods.go:59] 8 kube-system pods found
	I1003 19:38:22.873366  478234 system_pods.go:61] "coredns-66bc5c9577-h8n5p" [d7f4ec9d-9c68-4332-b6c7-e52f424dcd1e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1003 19:38:22.873404  478234 system_pods.go:61] "etcd-no-preload-643397" [642f5548-1caf-4bb4-9780-63e00e8b0a3c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1003 19:38:22.873419  478234 system_pods.go:61] "kindnet-7zwct" [bd0ecfeb-3764-425f-b7ae-e6f5b3e161d8] Running
	I1003 19:38:22.873430  478234 system_pods.go:61] "kube-apiserver-no-preload-643397" [6e4aa6fd-218d-45ce-a0d9-a1736936d2d3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1003 19:38:22.873441  478234 system_pods.go:61] "kube-controller-manager-no-preload-643397" [29843b74-a1d2-46af-ac5e-06f4d53a0ac4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1003 19:38:22.873446  478234 system_pods.go:61] "kube-proxy-lcs2q" [f25c0891-1202-477f-9cc9-5e41c3f1b9fb] Running
	I1003 19:38:22.873473  478234 system_pods.go:61] "kube-scheduler-no-preload-643397" [6865d4a0-3590-465e-81e1-927d271170c0] Running
	I1003 19:38:22.873484  478234 system_pods.go:61] "storage-provisioner" [355c16e4-3158-4ffc-9379-57747ed71cca] Running
	I1003 19:38:22.873492  478234 system_pods.go:74] duration metric: took 7.198254ms to wait for pod list to return data ...
	I1003 19:38:22.873505  478234 default_sa.go:34] waiting for default service account to be created ...
	I1003 19:38:22.880388  478234 default_sa.go:45] found service account: "default"
	I1003 19:38:22.880424  478234 default_sa.go:55] duration metric: took 6.911686ms for default service account to be created ...
	I1003 19:38:22.880451  478234 system_pods.go:116] waiting for k8s-apps to be running ...
	I1003 19:38:22.891458  478234 system_pods.go:86] 8 kube-system pods found
	I1003 19:38:22.891499  478234 system_pods.go:89] "coredns-66bc5c9577-h8n5p" [d7f4ec9d-9c68-4332-b6c7-e52f424dcd1e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1003 19:38:22.891529  478234 system_pods.go:89] "etcd-no-preload-643397" [642f5548-1caf-4bb4-9780-63e00e8b0a3c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1003 19:38:22.891545  478234 system_pods.go:89] "kindnet-7zwct" [bd0ecfeb-3764-425f-b7ae-e6f5b3e161d8] Running
	I1003 19:38:22.891553  478234 system_pods.go:89] "kube-apiserver-no-preload-643397" [6e4aa6fd-218d-45ce-a0d9-a1736936d2d3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1003 19:38:22.891581  478234 system_pods.go:89] "kube-controller-manager-no-preload-643397" [29843b74-a1d2-46af-ac5e-06f4d53a0ac4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1003 19:38:22.891598  478234 system_pods.go:89] "kube-proxy-lcs2q" [f25c0891-1202-477f-9cc9-5e41c3f1b9fb] Running
	I1003 19:38:22.891611  478234 system_pods.go:89] "kube-scheduler-no-preload-643397" [6865d4a0-3590-465e-81e1-927d271170c0] Running
	I1003 19:38:22.891616  478234 system_pods.go:89] "storage-provisioner" [355c16e4-3158-4ffc-9379-57747ed71cca] Running
	I1003 19:38:22.891624  478234 system_pods.go:126] duration metric: took 11.160849ms to wait for k8s-apps to be running ...
	I1003 19:38:22.891651  478234 system_svc.go:44] waiting for kubelet service to be running ....
	I1003 19:38:22.891723  478234 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 19:38:22.904566  478234 system_svc.go:56] duration metric: took 12.907205ms WaitForService to wait for kubelet
	I1003 19:38:22.904635  478234 kubeadm.go:586] duration metric: took 11.446373696s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 19:38:22.904670  478234 node_conditions.go:102] verifying NodePressure condition ...
	I1003 19:38:22.907835  478234 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1003 19:38:22.907908  478234 node_conditions.go:123] node cpu capacity is 2
	I1003 19:38:22.907935  478234 node_conditions.go:105] duration metric: took 3.244684ms to run NodePressure ...
	I1003 19:38:22.907960  478234 start.go:241] waiting for startup goroutines ...
	I1003 19:38:22.907994  478234 start.go:246] waiting for cluster config update ...
	I1003 19:38:22.908024  478234 start.go:255] writing updated cluster config ...
	I1003 19:38:22.908334  478234 ssh_runner.go:195] Run: rm -f paused
	I1003 19:38:22.913846  478234 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1003 19:38:22.918761  478234 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-h8n5p" in "kube-system" namespace to be "Ready" or be gone ...
	W1003 19:38:24.925810  478234 pod_ready.go:104] pod "coredns-66bc5c9577-h8n5p" is not "Ready", error: <nil>
	I1003 19:38:23.533159  477208 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.002026815s
	I1003 19:38:23.536778  477208 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1003 19:38:23.536878  477208 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1003 19:38:23.537112  477208 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1003 19:38:23.537203  477208 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1003 19:38:26.701026  477208 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.163382872s
	W1003 19:38:26.929323  478234 pod_ready.go:104] pod "coredns-66bc5c9577-h8n5p" is not "Ready", error: <nil>
	W1003 19:38:29.428010  478234 pod_ready.go:104] pod "coredns-66bc5c9577-h8n5p" is not "Ready", error: <nil>
	I1003 19:38:31.039510  477208 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 7.502107269s
	I1003 19:38:31.432694  477208 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 7.893676401s
	I1003 19:38:31.462451  477208 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1003 19:38:31.485768  477208 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1003 19:38:31.515781  477208 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1003 19:38:31.516010  477208 kubeadm.go:318] [mark-control-plane] Marking the node embed-certs-327416 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1003 19:38:31.539554  477208 kubeadm.go:318] [bootstrap-token] Using token: 5yu88r.ez5e2j3x2s20vqjm
	I1003 19:38:31.542613  477208 out.go:252]   - Configuring RBAC rules ...
	I1003 19:38:31.542745  477208 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1003 19:38:31.552466  477208 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1003 19:38:31.574884  477208 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1003 19:38:31.582994  477208 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1003 19:38:31.589350  477208 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1003 19:38:31.600254  477208 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1003 19:38:31.838735  477208 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1003 19:38:32.297926  477208 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1003 19:38:32.839629  477208 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1003 19:38:32.840769  477208 kubeadm.go:318] 
	I1003 19:38:32.840857  477208 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1003 19:38:32.840867  477208 kubeadm.go:318] 
	I1003 19:38:32.840948  477208 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1003 19:38:32.840958  477208 kubeadm.go:318] 
	I1003 19:38:32.841010  477208 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1003 19:38:32.841087  477208 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1003 19:38:32.841142  477208 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1003 19:38:32.841146  477208 kubeadm.go:318] 
	I1003 19:38:32.841211  477208 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1003 19:38:32.841218  477208 kubeadm.go:318] 
	I1003 19:38:32.841268  477208 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1003 19:38:32.841279  477208 kubeadm.go:318] 
	I1003 19:38:32.841333  477208 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1003 19:38:32.841412  477208 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1003 19:38:32.841483  477208 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1003 19:38:32.841488  477208 kubeadm.go:318] 
	I1003 19:38:32.841576  477208 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1003 19:38:32.841656  477208 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1003 19:38:32.841668  477208 kubeadm.go:318] 
	I1003 19:38:32.841756  477208 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 5yu88r.ez5e2j3x2s20vqjm \
	I1003 19:38:32.841864  477208 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:f66ff31263aa4cda6b17caa2076838d6a1918275f1c2773b90b119c0d4a4d71a \
	I1003 19:38:32.841885  477208 kubeadm.go:318] 	--control-plane 
	I1003 19:38:32.841890  477208 kubeadm.go:318] 
	I1003 19:38:32.841983  477208 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1003 19:38:32.841988  477208 kubeadm.go:318] 
	I1003 19:38:32.842073  477208 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 5yu88r.ez5e2j3x2s20vqjm \
	I1003 19:38:32.842179  477208 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:f66ff31263aa4cda6b17caa2076838d6a1918275f1c2773b90b119c0d4a4d71a 
	I1003 19:38:32.845828  477208 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1003 19:38:32.846070  477208 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1003 19:38:32.846186  477208 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1003 19:38:32.846196  477208 cni.go:84] Creating CNI manager for ""
	I1003 19:38:32.846203  477208 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 19:38:32.849429  477208 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1003 19:38:32.852320  477208 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1003 19:38:32.856812  477208 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1003 19:38:32.856835  477208 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1003 19:38:32.872270  477208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	W1003 19:38:31.928210  478234 pod_ready.go:104] pod "coredns-66bc5c9577-h8n5p" is not "Ready", error: <nil>
	W1003 19:38:34.424148  478234 pod_ready.go:104] pod "coredns-66bc5c9577-h8n5p" is not "Ready", error: <nil>
	I1003 19:38:33.232410  477208 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1003 19:38:33.232573  477208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 19:38:33.232662  477208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-327416 minikube.k8s.io/updated_at=2025_10_03T19_38_33_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=a43873c79fc22f8b1ccd29d3dfa635d392b09335 minikube.k8s.io/name=embed-certs-327416 minikube.k8s.io/primary=true
	I1003 19:38:33.712259  477208 ops.go:34] apiserver oom_adj: -16
	I1003 19:38:33.712370  477208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 19:38:34.212889  477208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 19:38:34.712547  477208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 19:38:35.212573  477208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 19:38:35.713000  477208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 19:38:36.212858  477208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 19:38:36.713373  477208 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 19:38:36.865885  477208 kubeadm.go:1113] duration metric: took 3.633359094s to wait for elevateKubeSystemPrivileges
	I1003 19:38:36.865912  477208 kubeadm.go:402] duration metric: took 24.676219021s to StartCluster
	I1003 19:38:36.865929  477208 settings.go:142] acquiring lock: {Name:mkc95577dbc448e3409dfa2b5e53a3a1327cb451 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:38:36.865994  477208 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21625-284583/kubeconfig
	I1003 19:38:36.867630  477208 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/kubeconfig: {Name:mkc1323fd87f4a78231a26d2dab0dff7feecf1e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:38:36.873736  477208 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1003 19:38:36.874512  477208 config.go:182] Loaded profile config "embed-certs-327416": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 19:38:36.874585  477208 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1003 19:38:36.874646  477208 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1003 19:38:36.874818  477208 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-327416"
	I1003 19:38:36.874843  477208 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-327416"
	I1003 19:38:36.874864  477208 host.go:66] Checking if "embed-certs-327416" exists ...
	I1003 19:38:36.875343  477208 cli_runner.go:164] Run: docker container inspect embed-certs-327416 --format={{.State.Status}}
	I1003 19:38:36.878052  477208 out.go:179] * Verifying Kubernetes components...
	I1003 19:38:36.881384  477208 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 19:38:36.882607  477208 addons.go:69] Setting default-storageclass=true in profile "embed-certs-327416"
	I1003 19:38:36.882635  477208 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-327416"
	I1003 19:38:36.882970  477208 cli_runner.go:164] Run: docker container inspect embed-certs-327416 --format={{.State.Status}}
	I1003 19:38:36.918963  477208 addons.go:238] Setting addon default-storageclass=true in "embed-certs-327416"
	I1003 19:38:36.919003  477208 host.go:66] Checking if "embed-certs-327416" exists ...
	I1003 19:38:36.919419  477208 cli_runner.go:164] Run: docker container inspect embed-certs-327416 --format={{.State.Status}}
	I1003 19:38:36.928101  477208 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 19:38:36.933297  477208 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 19:38:36.933321  477208 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1003 19:38:36.933389  477208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-327416
	I1003 19:38:36.968698  477208 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/embed-certs-327416/id_rsa Username:docker}
	I1003 19:38:36.980816  477208 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1003 19:38:36.980838  477208 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1003 19:38:36.980900  477208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-327416
	I1003 19:38:37.006826  477208 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/embed-certs-327416/id_rsa Username:docker}
	I1003 19:38:37.444644  477208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1003 19:38:37.450626  477208 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 19:38:37.523452  477208 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 19:38:37.523727  477208 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1003 19:38:39.004422  477208 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.553722295s)
	I1003 19:38:39.004675  477208 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.480901372s)
	I1003 19:38:39.004872  477208 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1003 19:38:39.004845  477208 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.481325558s)
	I1003 19:38:39.006185  477208 node_ready.go:35] waiting up to 6m0s for node "embed-certs-327416" to be "Ready" ...
	I1003 19:38:39.009129  477208 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	W1003 19:38:36.434122  478234 pod_ready.go:104] pod "coredns-66bc5c9577-h8n5p" is not "Ready", error: <nil>
	W1003 19:38:38.437357  478234 pod_ready.go:104] pod "coredns-66bc5c9577-h8n5p" is not "Ready", error: <nil>
	W1003 19:38:40.925089  478234 pod_ready.go:104] pod "coredns-66bc5c9577-h8n5p" is not "Ready", error: <nil>
	I1003 19:38:39.012550  477208 addons.go:514] duration metric: took 2.137895657s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1003 19:38:39.509864  477208 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-327416" context rescaled to 1 replicas
	W1003 19:38:41.010102  477208 node_ready.go:57] node "embed-certs-327416" has "Ready":"False" status (will retry)
	W1003 19:38:43.424174  478234 pod_ready.go:104] pod "coredns-66bc5c9577-h8n5p" is not "Ready", error: <nil>
	W1003 19:38:45.426323  478234 pod_ready.go:104] pod "coredns-66bc5c9577-h8n5p" is not "Ready", error: <nil>
	W1003 19:38:43.012121  477208 node_ready.go:57] node "embed-certs-327416" has "Ready":"False" status (will retry)
	W1003 19:38:45.016030  477208 node_ready.go:57] node "embed-certs-327416" has "Ready":"False" status (will retry)
	W1003 19:38:47.508862  477208 node_ready.go:57] node "embed-certs-327416" has "Ready":"False" status (will retry)
	W1003 19:38:47.923808  478234 pod_ready.go:104] pod "coredns-66bc5c9577-h8n5p" is not "Ready", error: <nil>
	W1003 19:38:49.924330  478234 pod_ready.go:104] pod "coredns-66bc5c9577-h8n5p" is not "Ready", error: <nil>
	W1003 19:38:49.508997  477208 node_ready.go:57] node "embed-certs-327416" has "Ready":"False" status (will retry)
	W1003 19:38:51.510173  477208 node_ready.go:57] node "embed-certs-327416" has "Ready":"False" status (will retry)
	W1003 19:38:52.425011  478234 pod_ready.go:104] pod "coredns-66bc5c9577-h8n5p" is not "Ready", error: <nil>
	W1003 19:38:54.925148  478234 pod_ready.go:104] pod "coredns-66bc5c9577-h8n5p" is not "Ready", error: <nil>
	W1003 19:38:54.010483  477208 node_ready.go:57] node "embed-certs-327416" has "Ready":"False" status (will retry)
	W1003 19:38:56.509685  477208 node_ready.go:57] node "embed-certs-327416" has "Ready":"False" status (will retry)
	W1003 19:38:57.424770  478234 pod_ready.go:104] pod "coredns-66bc5c9577-h8n5p" is not "Ready", error: <nil>
	W1003 19:38:59.425018  478234 pod_ready.go:104] pod "coredns-66bc5c9577-h8n5p" is not "Ready", error: <nil>
	W1003 19:38:59.009804  477208 node_ready.go:57] node "embed-certs-327416" has "Ready":"False" status (will retry)
	W1003 19:39:01.026299  477208 node_ready.go:57] node "embed-certs-327416" has "Ready":"False" status (will retry)
	W1003 19:39:01.925035  478234 pod_ready.go:104] pod "coredns-66bc5c9577-h8n5p" is not "Ready", error: <nil>
	I1003 19:39:02.924328  478234 pod_ready.go:94] pod "coredns-66bc5c9577-h8n5p" is "Ready"
	I1003 19:39:02.924360  478234 pod_ready.go:86] duration metric: took 40.005531941s for pod "coredns-66bc5c9577-h8n5p" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:39:02.926948  478234 pod_ready.go:83] waiting for pod "etcd-no-preload-643397" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:39:02.931787  478234 pod_ready.go:94] pod "etcd-no-preload-643397" is "Ready"
	I1003 19:39:02.931857  478234 pod_ready.go:86] duration metric: took 4.881969ms for pod "etcd-no-preload-643397" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:39:02.934529  478234 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-643397" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:39:02.939172  478234 pod_ready.go:94] pod "kube-apiserver-no-preload-643397" is "Ready"
	I1003 19:39:02.939200  478234 pod_ready.go:86] duration metric: took 4.645937ms for pod "kube-apiserver-no-preload-643397" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:39:02.941614  478234 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-643397" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:39:03.122962  478234 pod_ready.go:94] pod "kube-controller-manager-no-preload-643397" is "Ready"
	I1003 19:39:03.123038  478234 pod_ready.go:86] duration metric: took 181.400022ms for pod "kube-controller-manager-no-preload-643397" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:39:03.323073  478234 pod_ready.go:83] waiting for pod "kube-proxy-lcs2q" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:39:03.723280  478234 pod_ready.go:94] pod "kube-proxy-lcs2q" is "Ready"
	I1003 19:39:03.723310  478234 pod_ready.go:86] duration metric: took 400.211074ms for pod "kube-proxy-lcs2q" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:39:03.922422  478234 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-643397" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:39:04.322850  478234 pod_ready.go:94] pod "kube-scheduler-no-preload-643397" is "Ready"
	I1003 19:39:04.322877  478234 pod_ready.go:86] duration metric: took 400.428154ms for pod "kube-scheduler-no-preload-643397" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:39:04.322890  478234 pod_ready.go:40] duration metric: took 41.408970041s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1003 19:39:04.389109  478234 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1003 19:39:04.392322  478234 out.go:179] * Done! kubectl is now configured to use "no-preload-643397" cluster and "default" namespace by default
	W1003 19:39:03.509050  477208 node_ready.go:57] node "embed-certs-327416" has "Ready":"False" status (will retry)
	W1003 19:39:05.509685  477208 node_ready.go:57] node "embed-certs-327416" has "Ready":"False" status (will retry)
	W1003 19:39:08.012286  477208 node_ready.go:57] node "embed-certs-327416" has "Ready":"False" status (will retry)
	W1003 19:39:10.014610  477208 node_ready.go:57] node "embed-certs-327416" has "Ready":"False" status (will retry)
	W1003 19:39:12.510806  477208 node_ready.go:57] node "embed-certs-327416" has "Ready":"False" status (will retry)
	W1003 19:39:15.012834  477208 node_ready.go:57] node "embed-certs-327416" has "Ready":"False" status (will retry)
	W1003 19:39:17.509950  477208 node_ready.go:57] node "embed-certs-327416" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Oct 03 19:39:01 no-preload-643397 crio[654]: time="2025-10-03T19:39:01.555465742Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 03 19:39:01 no-preload-643397 crio[654]: time="2025-10-03T19:39:01.558649602Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 03 19:39:01 no-preload-643397 crio[654]: time="2025-10-03T19:39:01.558685533Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 03 19:39:01 no-preload-643397 crio[654]: time="2025-10-03T19:39:01.558703281Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 03 19:39:01 no-preload-643397 crio[654]: time="2025-10-03T19:39:01.561772505Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 03 19:39:01 no-preload-643397 crio[654]: time="2025-10-03T19:39:01.561806745Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 03 19:39:01 no-preload-643397 crio[654]: time="2025-10-03T19:39:01.561829252Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 03 19:39:01 no-preload-643397 crio[654]: time="2025-10-03T19:39:01.564956972Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 03 19:39:01 no-preload-643397 crio[654]: time="2025-10-03T19:39:01.564993493Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 03 19:39:01 no-preload-643397 crio[654]: time="2025-10-03T19:39:01.565060858Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 03 19:39:01 no-preload-643397 crio[654]: time="2025-10-03T19:39:01.568236053Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 03 19:39:01 no-preload-643397 crio[654]: time="2025-10-03T19:39:01.568270285Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 03 19:39:11 no-preload-643397 crio[654]: time="2025-10-03T19:39:11.265757322Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=b03489d1-b1b3-48f6-b731-61e1642239eb name=/runtime.v1.ImageService/ImageStatus
	Oct 03 19:39:11 no-preload-643397 crio[654]: time="2025-10-03T19:39:11.266716016Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=58ffe6d8-3ff1-49fa-9d02-8fd99fcebc65 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 19:39:11 no-preload-643397 crio[654]: time="2025-10-03T19:39:11.267708942Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8dq9s/dashboard-metrics-scraper" id=8ad9ccc0-d75f-407f-8d30-6f99eb9d7bc0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:39:11 no-preload-643397 crio[654]: time="2025-10-03T19:39:11.267991449Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 19:39:11 no-preload-643397 crio[654]: time="2025-10-03T19:39:11.274935825Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 19:39:11 no-preload-643397 crio[654]: time="2025-10-03T19:39:11.275919479Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 19:39:11 no-preload-643397 crio[654]: time="2025-10-03T19:39:11.290779069Z" level=info msg="Created container 9e1e9b4fe19a20d0e1d02f1ab66d7f7479fb8f666b2994af5f888db15ff382d4: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8dq9s/dashboard-metrics-scraper" id=8ad9ccc0-d75f-407f-8d30-6f99eb9d7bc0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:39:11 no-preload-643397 crio[654]: time="2025-10-03T19:39:11.29184462Z" level=info msg="Starting container: 9e1e9b4fe19a20d0e1d02f1ab66d7f7479fb8f666b2994af5f888db15ff382d4" id=03c10f19-b1e4-476a-81e1-4bb955c63bf5 name=/runtime.v1.RuntimeService/StartContainer
	Oct 03 19:39:11 no-preload-643397 crio[654]: time="2025-10-03T19:39:11.293559952Z" level=info msg="Started container" PID=1712 containerID=9e1e9b4fe19a20d0e1d02f1ab66d7f7479fb8f666b2994af5f888db15ff382d4 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8dq9s/dashboard-metrics-scraper id=03c10f19-b1e4-476a-81e1-4bb955c63bf5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=fa2c3bf1de5856f8a0ae1764925cf9d85321ea8f1d07f19d8180930c2110e67e
	Oct 03 19:39:11 no-preload-643397 conmon[1710]: conmon 9e1e9b4fe19a20d0e1d0 <ninfo>: container 1712 exited with status 1
	Oct 03 19:39:11 no-preload-643397 crio[654]: time="2025-10-03T19:39:11.610153929Z" level=info msg="Removing container: aa979906c9238234a589dc7f071f0a32b32a63d0ca00c51054df57d182702aa3" id=99a2d696-b0bb-482f-ab5e-87eb9df0436c name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 03 19:39:11 no-preload-643397 crio[654]: time="2025-10-03T19:39:11.617433005Z" level=info msg="Error loading conmon cgroup of container aa979906c9238234a589dc7f071f0a32b32a63d0ca00c51054df57d182702aa3: cgroup deleted" id=99a2d696-b0bb-482f-ab5e-87eb9df0436c name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 03 19:39:11 no-preload-643397 crio[654]: time="2025-10-03T19:39:11.621080281Z" level=info msg="Removed container aa979906c9238234a589dc7f071f0a32b32a63d0ca00c51054df57d182702aa3: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8dq9s/dashboard-metrics-scraper" id=99a2d696-b0bb-482f-ab5e-87eb9df0436c name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	9e1e9b4fe19a2       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           10 seconds ago       Exited              dashboard-metrics-scraper   3                   fa2c3bf1de585       dashboard-metrics-scraper-6ffb444bf9-8dq9s   kubernetes-dashboard
	aa091721e2bf9       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           29 seconds ago       Running             storage-provisioner         2                   8055f22ba63b1       storage-provisioner                          kube-system
	8ed7a25aeb889       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   41 seconds ago       Running             kubernetes-dashboard        0                   eb363cbf331a8       kubernetes-dashboard-855c9754f9-8x6xp        kubernetes-dashboard
	655ef1811e74e       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           About a minute ago   Running             busybox                     1                   4d3225b78f7c8       busybox                                      default
	08858262c4153       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           About a minute ago   Running             coredns                     1                   0b079101aaf55       coredns-66bc5c9577-h8n5p                     kube-system
	9a21627a747b3       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           About a minute ago   Running             kindnet-cni                 1                   38fec71ee5a7c       kindnet-7zwct                                kube-system
	536d418166ee5       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           About a minute ago   Exited              storage-provisioner         1                   8055f22ba63b1       storage-provisioner                          kube-system
	3758592f491ab       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           About a minute ago   Running             kube-proxy                  1                   c91d5a3b983bd       kube-proxy-lcs2q                             kube-system
	b652fe32e2a41       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   69691b1f1c219       etcd-no-preload-643397                       kube-system
	812c215ff1311       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   d4318b9958916       kube-controller-manager-no-preload-643397    kube-system
	50b207c92dde7       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   d8a36802f5f7a       kube-apiserver-no-preload-643397             kube-system
	c2a31dbd1b598       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   31dff4458a38a       kube-scheduler-no-preload-643397             kube-system
	
	
	==> coredns [08858262c415390ebd844284cd70070377a032c8c9eb33572a8ede338609d2c5] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35992 - 33621 "HINFO IN 4915121020754239743.973228478016810188. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.025106233s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               no-preload-643397
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-643397
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a43873c79fc22f8b1ccd29d3dfa635d392b09335
	                    minikube.k8s.io/name=no-preload-643397
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_03T19_37_14_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 03 Oct 2025 19:37:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-643397
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 03 Oct 2025 19:39:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 03 Oct 2025 19:38:40 +0000   Fri, 03 Oct 2025 19:37:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 03 Oct 2025 19:38:40 +0000   Fri, 03 Oct 2025 19:37:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 03 Oct 2025 19:38:40 +0000   Fri, 03 Oct 2025 19:37:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 03 Oct 2025 19:38:40 +0000   Fri, 03 Oct 2025 19:37:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-643397
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 1d54560dca6f48f99b1f04666fc49819
	  System UUID:                acffaaf4-a938-4dce-9b53-3c0346f455b4
	  Boot ID:                    3762136e-8bec-4104-a5cb-0b1976f6048e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 coredns-66bc5c9577-h8n5p                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m2s
	  kube-system                 etcd-no-preload-643397                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m7s
	  kube-system                 kindnet-7zwct                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m2s
	  kube-system                 kube-apiserver-no-preload-643397              250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m7s
	  kube-system                 kube-controller-manager-no-preload-643397     200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 kube-proxy-lcs2q                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-scheduler-no-preload-643397              100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m7s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-8dq9s    0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-8x6xp         0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 2m                 kube-proxy       
	  Normal   Starting                 58s                kube-proxy       
	  Normal   Starting                 2m8s               kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m8s               kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     2m7s               kubelet          Node no-preload-643397 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m7s               kubelet          Node no-preload-643397 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  2m7s               kubelet          Node no-preload-643397 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           2m3s               node-controller  Node no-preload-643397 event: Registered Node no-preload-643397 in Controller
	  Normal   NodeReady                108s               kubelet          Node no-preload-643397 status is now: NodeReady
	  Normal   Starting                 71s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 71s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  71s (x8 over 71s)  kubelet          Node no-preload-643397 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    71s (x8 over 71s)  kubelet          Node no-preload-643397 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     71s (x8 over 71s)  kubelet          Node no-preload-643397 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           58s                node-controller  Node no-preload-643397 event: Registered Node no-preload-643397 in Controller
	
	
	==> dmesg <==
	[Oct 3 19:09] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:10] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:11] overlayfs: idmapped layers are currently not supported
	[  +4.287643] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:12] overlayfs: idmapped layers are currently not supported
	[ +24.839009] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:13] overlayfs: idmapped layers are currently not supported
	[ +26.493253] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:15] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:16] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:17] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000010] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Oct 3 19:18] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:20] overlayfs: idmapped layers are currently not supported
	[ +32.018892] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:22] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:24] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:26] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:32] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:34] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:35] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:36] overlayfs: idmapped layers are currently not supported
	[  +4.740983] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:38] overlayfs: idmapped layers are currently not supported
	[ +12.897300] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [b652fe32e2a41b7f6685f05ea15d89051280d1a714c5ade044ee7267681f63c0] <==
	{"level":"warn","ts":"2025-10-03T19:38:16.979307Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:38:17.015988Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:38:17.038894Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:38:17.079467Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:38:17.103269Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:38:17.141518Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:38:17.195293Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:38:17.232756Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:38:17.270944Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:38:17.321464Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:38:17.358723Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:38:17.391269Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:38:17.419006Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:38:17.457074Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:38:17.480053Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38008","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:38:17.511853Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:38:17.534699Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:38:17.601549Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:38:17.637937Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:38:17.664039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:38:17.682687Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:38:17.728062Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:38:17.765235Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:38:17.789239Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:38:17.845301Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38204","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 19:39:21 up  2:21,  0 user,  load average: 3.25, 2.82, 2.20
	Linux no-preload-643397 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9a21627a747b30eb7424912a81297de7e4b519fb2f1252d457725408bd116383] <==
	I1003 19:38:21.238237       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1003 19:38:21.242368       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1003 19:38:21.242507       1 main.go:148] setting mtu 1500 for CNI 
	I1003 19:38:21.242519       1 main.go:178] kindnetd IP family: "ipv4"
	I1003 19:38:21.242533       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-03T19:38:21Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1003 19:38:21.543381       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1003 19:38:21.543408       1 controller.go:381] "Waiting for informer caches to sync"
	I1003 19:38:21.543417       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1003 19:38:21.543704       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1003 19:38:51.543987       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1003 19:38:51.544209       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1003 19:38:51.544296       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1003 19:38:51.544432       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1003 19:38:52.844531       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1003 19:38:52.844563       1 metrics.go:72] Registering metrics
	I1003 19:38:52.844635       1 controller.go:711] "Syncing nftables rules"
	I1003 19:39:01.546223       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1003 19:39:01.546279       1 main.go:301] handling current node
	I1003 19:39:11.550819       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1003 19:39:11.550857       1 main.go:301] handling current node
	I1003 19:39:21.545863       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1003 19:39:21.545898       1 main.go:301] handling current node
	
	
	==> kube-apiserver [50b207c92dde75b009a0a2439f4af8008c52855e0ddbc54dcf57ab3bd1972302] <==
	I1003 19:38:19.660067       1 aggregator.go:171] initial CRD sync complete...
	I1003 19:38:19.660092       1 autoregister_controller.go:144] Starting autoregister controller
	I1003 19:38:19.660100       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1003 19:38:19.660107       1 cache.go:39] Caches are synced for autoregister controller
	I1003 19:38:19.012812       1 repairip.go:210] Starting ipallocator-repair-controller
	I1003 19:38:19.660250       1 shared_informer.go:349] "Waiting for caches to sync" controller="ipallocator-repair-controller"
	I1003 19:38:19.660257       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1003 19:38:19.660353       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1003 19:38:19.013032       1 default_servicecidr_controller.go:111] Starting kubernetes-service-cidr-controller
	I1003 19:38:19.661106       1 shared_informer.go:349] "Waiting for caches to sync" controller="kubernetes-service-cidr-controller"
	I1003 19:38:19.698352       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1003 19:38:19.766533       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1003 19:38:19.776207       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1003 19:38:19.776274       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1003 19:38:20.026968       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1003 19:38:20.185192       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1003 19:38:21.840961       1 controller.go:667] quota admission added evaluator for: namespaces
	I1003 19:38:22.028068       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1003 19:38:22.138259       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1003 19:38:22.190156       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1003 19:38:22.379583       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.165.55"}
	I1003 19:38:22.477591       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.221.55"}
	I1003 19:38:23.895251       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1003 19:38:24.297919       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1003 19:38:24.345019       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [812c215ff131175f339b6cce18e2749be199f4a5f61868272c2e91503fb4ccb8] <==
	I1003 19:38:23.892781       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1003 19:38:23.898125       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1003 19:38:23.899047       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1003 19:38:23.899062       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1003 19:38:23.904179       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1003 19:38:23.908478       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1003 19:38:23.910386       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1003 19:38:23.913598       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1003 19:38:23.918823       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1003 19:38:23.919251       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1003 19:38:23.925493       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1003 19:38:23.927836       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1003 19:38:23.934373       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1003 19:38:23.934513       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1003 19:38:23.934634       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-643397"
	I1003 19:38:23.934703       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1003 19:38:23.937832       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1003 19:38:23.937908       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1003 19:38:23.943604       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1003 19:38:23.943666       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1003 19:38:23.943694       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1003 19:38:23.948777       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1003 19:38:23.951096       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1003 19:38:23.953801       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1003 19:38:23.956579       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	
	
	==> kube-proxy [3758592f491ab78c49e621316a06fabe1198eeb6f1be7d8ed8d05bc65d190237] <==
	I1003 19:38:21.790176       1 server_linux.go:53] "Using iptables proxy"
	I1003 19:38:22.093931       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1003 19:38:22.294957       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1003 19:38:22.295286       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1003 19:38:22.295373       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1003 19:38:22.776851       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1003 19:38:22.776922       1 server_linux.go:132] "Using iptables Proxier"
	I1003 19:38:22.846454       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1003 19:38:22.846758       1 server.go:527] "Version info" version="v1.34.1"
	I1003 19:38:22.846774       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1003 19:38:22.855312       1 config.go:106] "Starting endpoint slice config controller"
	I1003 19:38:22.855335       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1003 19:38:22.855640       1 config.go:200] "Starting service config controller"
	I1003 19:38:22.855659       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1003 19:38:22.866500       1 config.go:403] "Starting serviceCIDR config controller"
	I1003 19:38:22.866629       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1003 19:38:22.877432       1 config.go:309] "Starting node config controller"
	I1003 19:38:22.877527       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1003 19:38:22.877561       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1003 19:38:22.955542       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1003 19:38:22.956795       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1003 19:38:22.966712       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [c2a31dbd1b598431e3e46d051690749feb66f319d34b0915aae14a51b8c1b0e2] <==
	I1003 19:38:15.267084       1 serving.go:386] Generated self-signed cert in-memory
	I1003 19:38:20.923240       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1003 19:38:20.923265       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1003 19:38:20.975315       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1003 19:38:20.975414       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1003 19:38:20.975431       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1003 19:38:20.975483       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1003 19:38:21.014081       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1003 19:38:21.014113       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1003 19:38:21.014133       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1003 19:38:21.014140       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1003 19:38:21.138465       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1003 19:38:21.138916       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1003 19:38:21.178020       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Oct 03 19:38:33 no-preload-643397 kubelet[774]: I1003 19:38:33.478417     774 scope.go:117] "RemoveContainer" containerID="a0a594c0ba53d77dd610f887674b8330cdd03b9f36fb8bc5d80d050bc9a9c948"
	Oct 03 19:38:34 no-preload-643397 kubelet[774]: I1003 19:38:34.483412     774 scope.go:117] "RemoveContainer" containerID="a0a594c0ba53d77dd610f887674b8330cdd03b9f36fb8bc5d80d050bc9a9c948"
	Oct 03 19:38:34 no-preload-643397 kubelet[774]: I1003 19:38:34.483807     774 scope.go:117] "RemoveContainer" containerID="8cb2a1d4a7332c64f343d4090306f882560b05ae38075f8fbf622b19b615d75c"
	Oct 03 19:38:34 no-preload-643397 kubelet[774]: E1003 19:38:34.483983     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8dq9s_kubernetes-dashboard(339a73b0-9164-4e99-bfc4-ba69ac8b1fc8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8dq9s" podUID="339a73b0-9164-4e99-bfc4-ba69ac8b1fc8"
	Oct 03 19:38:35 no-preload-643397 kubelet[774]: I1003 19:38:35.507407     774 scope.go:117] "RemoveContainer" containerID="8cb2a1d4a7332c64f343d4090306f882560b05ae38075f8fbf622b19b615d75c"
	Oct 03 19:38:35 no-preload-643397 kubelet[774]: E1003 19:38:35.507577     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8dq9s_kubernetes-dashboard(339a73b0-9164-4e99-bfc4-ba69ac8b1fc8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8dq9s" podUID="339a73b0-9164-4e99-bfc4-ba69ac8b1fc8"
	Oct 03 19:38:36 no-preload-643397 kubelet[774]: I1003 19:38:36.509936     774 scope.go:117] "RemoveContainer" containerID="8cb2a1d4a7332c64f343d4090306f882560b05ae38075f8fbf622b19b615d75c"
	Oct 03 19:38:36 no-preload-643397 kubelet[774]: E1003 19:38:36.510100     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8dq9s_kubernetes-dashboard(339a73b0-9164-4e99-bfc4-ba69ac8b1fc8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8dq9s" podUID="339a73b0-9164-4e99-bfc4-ba69ac8b1fc8"
	Oct 03 19:38:49 no-preload-643397 kubelet[774]: I1003 19:38:49.266022     774 scope.go:117] "RemoveContainer" containerID="8cb2a1d4a7332c64f343d4090306f882560b05ae38075f8fbf622b19b615d75c"
	Oct 03 19:38:49 no-preload-643397 kubelet[774]: I1003 19:38:49.550977     774 scope.go:117] "RemoveContainer" containerID="8cb2a1d4a7332c64f343d4090306f882560b05ae38075f8fbf622b19b615d75c"
	Oct 03 19:38:50 no-preload-643397 kubelet[774]: I1003 19:38:50.554735     774 scope.go:117] "RemoveContainer" containerID="aa979906c9238234a589dc7f071f0a32b32a63d0ca00c51054df57d182702aa3"
	Oct 03 19:38:50 no-preload-643397 kubelet[774]: E1003 19:38:50.554887     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8dq9s_kubernetes-dashboard(339a73b0-9164-4e99-bfc4-ba69ac8b1fc8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8dq9s" podUID="339a73b0-9164-4e99-bfc4-ba69ac8b1fc8"
	Oct 03 19:38:50 no-preload-643397 kubelet[774]: I1003 19:38:50.569055     774 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-8x6xp" podStartSLOduration=13.639206441 podStartE2EDuration="26.569038167s" podCreationTimestamp="2025-10-03 19:38:24 +0000 UTC" firstStartedPulling="2025-10-03 19:38:26.661400191 +0000 UTC m=+16.743900990" lastFinishedPulling="2025-10-03 19:38:39.591231917 +0000 UTC m=+29.673732716" observedRunningTime="2025-10-03 19:38:40.544994073 +0000 UTC m=+30.627494880" watchObservedRunningTime="2025-10-03 19:38:50.569038167 +0000 UTC m=+40.651538966"
	Oct 03 19:38:51 no-preload-643397 kubelet[774]: I1003 19:38:51.558502     774 scope.go:117] "RemoveContainer" containerID="536d418166ee54c56a8550cc5c3e8e5c8328113ba2d06a9231fa1c71db5c6035"
	Oct 03 19:38:56 no-preload-643397 kubelet[774]: I1003 19:38:56.341570     774 scope.go:117] "RemoveContainer" containerID="aa979906c9238234a589dc7f071f0a32b32a63d0ca00c51054df57d182702aa3"
	Oct 03 19:38:56 no-preload-643397 kubelet[774]: E1003 19:38:56.341758     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8dq9s_kubernetes-dashboard(339a73b0-9164-4e99-bfc4-ba69ac8b1fc8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8dq9s" podUID="339a73b0-9164-4e99-bfc4-ba69ac8b1fc8"
	Oct 03 19:39:11 no-preload-643397 kubelet[774]: I1003 19:39:11.265250     774 scope.go:117] "RemoveContainer" containerID="aa979906c9238234a589dc7f071f0a32b32a63d0ca00c51054df57d182702aa3"
	Oct 03 19:39:11 no-preload-643397 kubelet[774]: I1003 19:39:11.608971     774 scope.go:117] "RemoveContainer" containerID="aa979906c9238234a589dc7f071f0a32b32a63d0ca00c51054df57d182702aa3"
	Oct 03 19:39:12 no-preload-643397 kubelet[774]: I1003 19:39:12.613115     774 scope.go:117] "RemoveContainer" containerID="9e1e9b4fe19a20d0e1d02f1ab66d7f7479fb8f666b2994af5f888db15ff382d4"
	Oct 03 19:39:12 no-preload-643397 kubelet[774]: E1003 19:39:12.613279     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8dq9s_kubernetes-dashboard(339a73b0-9164-4e99-bfc4-ba69ac8b1fc8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8dq9s" podUID="339a73b0-9164-4e99-bfc4-ba69ac8b1fc8"
	Oct 03 19:39:16 no-preload-643397 kubelet[774]: I1003 19:39:16.341798     774 scope.go:117] "RemoveContainer" containerID="9e1e9b4fe19a20d0e1d02f1ab66d7f7479fb8f666b2994af5f888db15ff382d4"
	Oct 03 19:39:16 no-preload-643397 kubelet[774]: E1003 19:39:16.341972     774 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-8dq9s_kubernetes-dashboard(339a73b0-9164-4e99-bfc4-ba69ac8b1fc8)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-8dq9s" podUID="339a73b0-9164-4e99-bfc4-ba69ac8b1fc8"
	Oct 03 19:39:16 no-preload-643397 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 03 19:39:16 no-preload-643397 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 03 19:39:16 no-preload-643397 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [8ed7a25aeb889c9f8a8428310aeb66737ce47377bcda2f1f2e1c8885151af962] <==
	2025/10/03 19:38:39 Using namespace: kubernetes-dashboard
	2025/10/03 19:38:39 Using in-cluster config to connect to apiserver
	2025/10/03 19:38:39 Using secret token for csrf signing
	2025/10/03 19:38:39 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/03 19:38:39 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/03 19:38:39 Successful initial request to the apiserver, version: v1.34.1
	2025/10/03 19:38:39 Generating JWE encryption key
	2025/10/03 19:38:39 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/03 19:38:39 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/03 19:38:40 Initializing JWE encryption key from synchronized object
	2025/10/03 19:38:40 Creating in-cluster Sidecar client
	2025/10/03 19:38:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/03 19:38:40 Serving insecurely on HTTP port: 9090
	2025/10/03 19:39:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/03 19:38:39 Starting overwatch
	
	
	==> storage-provisioner [536d418166ee54c56a8550cc5c3e8e5c8328113ba2d06a9231fa1c71db5c6035] <==
	I1003 19:38:21.534546       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1003 19:38:51.536247       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [aa091721e2bf929a06f8f2a0382b1ac27830c5ef2bedaeb775f4567f2a80447c] <==
	W1003 19:38:51.635076       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:38:55.091358       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:38:59.351956       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:39:02.950266       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:39:06.003744       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:39:09.026088       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:39:09.031789       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1003 19:39:09.032078       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1003 19:39:09.032263       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-643397_52ea4327-a05b-4739-9d00-90b553f05ca0!
	I1003 19:39:09.033262       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0d558076-5928-4d46-b528-95f96636eae1", APIVersion:"v1", ResourceVersion:"642", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-643397_52ea4327-a05b-4739-9d00-90b553f05ca0 became leader
	W1003 19:39:09.040445       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:39:09.045385       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1003 19:39:09.132640       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-643397_52ea4327-a05b-4739-9d00-90b553f05ca0!
	W1003 19:39:11.048803       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:39:11.053749       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:39:13.057433       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:39:13.064683       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:39:15.067715       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:39:15.073744       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:39:17.078041       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:39:17.084116       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:39:19.086916       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:39:19.092192       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:39:21.095635       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:39:21.105860       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
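The storage-provisioner logs above show a common recovery pattern: the first instance dies on "dial tcp 10.96.0.1:443: i/o timeout" while the apiserver is still coming up, and its replacement acquires the kube-system/k8s.io-minikube-hostpath leader-election lease (an Endpoints object, per the event in the log) and starts the provisioner controller. A minimal sketch for checking which instance currently holds that lease, assuming the no-preload-643397 context is still reachable:

	kubectl --context no-preload-643397 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml

The holder identity seen in the log (no-preload-643397_52ea4327-a05b-4739-9d00-90b553f05ca0) should appear in the object's leader-election annotation.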
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-643397 -n no-preload-643397
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-643397 -n no-preload-643397: exit status 2 (409.998109ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-643397 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (6.55s)
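The post-mortem for this failure queries individual components through Go templates on minikube status (for example --format={{.APIServer}} above, which printed Running even though the command exited 2, since minikube also reports health through the exit status). A minimal sketch for checking the main components of this profile in one call; {{.Host}} and {{.APIServer}} are the fields the harness itself uses, while {{.Kubelet}} is an additional standard status field:

	out/minikube-linux-arm64 status -p no-preload-643397 --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'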

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.71s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-327416 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-327416 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (347.764081ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-03T19:39:33Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
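The MK_ADDON_ENABLE_PAUSED failure above comes from minikube's pre-flight pause check: before enabling an addon it lists paused containers with "sudo runc list -f json" on the node, and on this crio node the listing fails because the runc state directory /run/runc does not exist. A minimal sketch for reproducing the check by hand over the profile's SSH session, using only commands taken from or implied by the stderr above:

	out/minikube-linux-arm64 -p embed-certs-327416 ssh -- sudo runc list -f json
	out/minikube-linux-arm64 -p embed-certs-327416 ssh -- ls -d /run/runc

The first command should fail with the same "open /run/runc: no such file or directory" error, and the second confirms that the directory the check expects is absent.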
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-327416 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-327416 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-327416 describe deploy/metrics-server -n kube-system: exit status 1 (114.723647ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-327416 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
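The expectation above is that "addons enable metrics-server ... --registries=MetricsServer=fake.domain" produces a metrics-server Deployment whose container image is prefixed with the fake registry, which the test reads back via kubectl describe. A minimal sketch of the same check as a jsonpath query (hypothetical here, since the enable failed and the Deployment was never created, as the NotFound error shows):

	kubectl --context embed-certs-327416 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'

On a successful run this would print an image starting with fake.domain/registry.k8s.io/echoserver:1.4.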
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-327416
helpers_test.go:243: (dbg) docker inspect embed-certs-327416:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7044b9fbdfefb3fd8bce7381adae2abdcd93d79fb8452cc72e2f26e58ccd8222",
	        "Created": "2025-10-03T19:37:58.41651583Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 477605,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-03T19:37:58.478563151Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/7044b9fbdfefb3fd8bce7381adae2abdcd93d79fb8452cc72e2f26e58ccd8222/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7044b9fbdfefb3fd8bce7381adae2abdcd93d79fb8452cc72e2f26e58ccd8222/hostname",
	        "HostsPath": "/var/lib/docker/containers/7044b9fbdfefb3fd8bce7381adae2abdcd93d79fb8452cc72e2f26e58ccd8222/hosts",
	        "LogPath": "/var/lib/docker/containers/7044b9fbdfefb3fd8bce7381adae2abdcd93d79fb8452cc72e2f26e58ccd8222/7044b9fbdfefb3fd8bce7381adae2abdcd93d79fb8452cc72e2f26e58ccd8222-json.log",
	        "Name": "/embed-certs-327416",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-327416:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-327416",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7044b9fbdfefb3fd8bce7381adae2abdcd93d79fb8452cc72e2f26e58ccd8222",
	                "LowerDir": "/var/lib/docker/overlay2/6d78601b2f0a3bddd2f05c4f4ab25e1cdd9b0b6f0850c52b546e1909596049d0-init/diff:/var/lib/docker/overlay2/87b205803817b0b71a214d995ab7e10a92033bbf72d76d6e052f1d21ccecb313/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6d78601b2f0a3bddd2f05c4f4ab25e1cdd9b0b6f0850c52b546e1909596049d0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6d78601b2f0a3bddd2f05c4f4ab25e1cdd9b0b6f0850c52b546e1909596049d0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6d78601b2f0a3bddd2f05c4f4ab25e1cdd9b0b6f0850c52b546e1909596049d0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-327416",
	                "Source": "/var/lib/docker/volumes/embed-certs-327416/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-327416",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-327416",
	                "name.minikube.sigs.k8s.io": "embed-certs-327416",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "18de0ee3d5f3f95099157fde05e5573b5b913a10dd8a21e1e477f2ef524b85fa",
	            "SandboxKey": "/var/run/docker/netns/18de0ee3d5f3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33433"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33434"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33437"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33435"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33436"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-327416": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:28:5b:e4:68:47",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "438dfcb24f609c25637ab5cf83b5d0d8692bb34419c32369c46f82797d6523d1",
	                    "EndpointID": "86971bd9954c6f73e83d4abbb4b371afa6fc52b213923e5d1bc6237bbd123e4d",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-327416",
	                        "7044b9fbdfef"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
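The NetworkSettings.Ports block in the inspect output above is where the published 127.0.0.1 host ports for the node container come from, including 33436 for 8443/tcp, the port the cluster's API server is exposed on. A minimal sketch for reading just that mapping, assuming the container is still running:

	docker port embed-certs-327416 8443/tcp

which should print 127.0.0.1:33436, matching the entry above.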
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-327416 -n embed-certs-327416
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-327416 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-327416 logs -n 25: (1.267274474s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ delete  │ -p cert-options-305866                                                                                                                                                                                                                        │ cert-options-305866          │ jenkins │ v1.37.0 │ 03 Oct 25 19:34 UTC │ 03 Oct 25 19:35 UTC │
	│ start   │ -p old-k8s-version-174543 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-174543       │ jenkins │ v1.37.0 │ 03 Oct 25 19:35 UTC │ 03 Oct 25 19:36 UTC │
	│ start   │ -p cert-expiration-324520 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-324520       │ jenkins │ v1.37.0 │ 03 Oct 25 19:36 UTC │ 03 Oct 25 19:36 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-174543 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-174543       │ jenkins │ v1.37.0 │ 03 Oct 25 19:36 UTC │                     │
	│ stop    │ -p old-k8s-version-174543 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-174543       │ jenkins │ v1.37.0 │ 03 Oct 25 19:36 UTC │ 03 Oct 25 19:36 UTC │
	│ delete  │ -p cert-expiration-324520                                                                                                                                                                                                                     │ cert-expiration-324520       │ jenkins │ v1.37.0 │ 03 Oct 25 19:36 UTC │ 03 Oct 25 19:36 UTC │
	│ start   │ -p no-preload-643397 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-643397            │ jenkins │ v1.37.0 │ 03 Oct 25 19:36 UTC │ 03 Oct 25 19:37 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-174543 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-174543       │ jenkins │ v1.37.0 │ 03 Oct 25 19:36 UTC │ 03 Oct 25 19:36 UTC │
	│ start   │ -p old-k8s-version-174543 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-174543       │ jenkins │ v1.37.0 │ 03 Oct 25 19:36 UTC │ 03 Oct 25 19:37 UTC │
	│ image   │ old-k8s-version-174543 image list --format=json                                                                                                                                                                                               │ old-k8s-version-174543       │ jenkins │ v1.37.0 │ 03 Oct 25 19:37 UTC │ 03 Oct 25 19:37 UTC │
	│ pause   │ -p old-k8s-version-174543 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-174543       │ jenkins │ v1.37.0 │ 03 Oct 25 19:37 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-643397 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-643397            │ jenkins │ v1.37.0 │ 03 Oct 25 19:37 UTC │                     │
	│ stop    │ -p no-preload-643397 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-643397            │ jenkins │ v1.37.0 │ 03 Oct 25 19:37 UTC │ 03 Oct 25 19:38 UTC │
	│ delete  │ -p old-k8s-version-174543                                                                                                                                                                                                                     │ old-k8s-version-174543       │ jenkins │ v1.37.0 │ 03 Oct 25 19:37 UTC │ 03 Oct 25 19:37 UTC │
	│ delete  │ -p old-k8s-version-174543                                                                                                                                                                                                                     │ old-k8s-version-174543       │ jenkins │ v1.37.0 │ 03 Oct 25 19:37 UTC │ 03 Oct 25 19:37 UTC │
	│ start   │ -p embed-certs-327416 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-327416           │ jenkins │ v1.37.0 │ 03 Oct 25 19:37 UTC │ 03 Oct 25 19:39 UTC │
	│ addons  │ enable dashboard -p no-preload-643397 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-643397            │ jenkins │ v1.37.0 │ 03 Oct 25 19:38 UTC │ 03 Oct 25 19:38 UTC │
	│ start   │ -p no-preload-643397 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-643397            │ jenkins │ v1.37.0 │ 03 Oct 25 19:38 UTC │ 03 Oct 25 19:39 UTC │
	│ image   │ no-preload-643397 image list --format=json                                                                                                                                                                                                    │ no-preload-643397            │ jenkins │ v1.37.0 │ 03 Oct 25 19:39 UTC │ 03 Oct 25 19:39 UTC │
	│ pause   │ -p no-preload-643397 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-643397            │ jenkins │ v1.37.0 │ 03 Oct 25 19:39 UTC │                     │
	│ delete  │ -p no-preload-643397                                                                                                                                                                                                                          │ no-preload-643397            │ jenkins │ v1.37.0 │ 03 Oct 25 19:39 UTC │ 03 Oct 25 19:39 UTC │
	│ delete  │ -p no-preload-643397                                                                                                                                                                                                                          │ no-preload-643397            │ jenkins │ v1.37.0 │ 03 Oct 25 19:39 UTC │ 03 Oct 25 19:39 UTC │
	│ delete  │ -p disable-driver-mounts-839513                                                                                                                                                                                                               │ disable-driver-mounts-839513 │ jenkins │ v1.37.0 │ 03 Oct 25 19:39 UTC │ 03 Oct 25 19:39 UTC │
	│ start   │ -p default-k8s-diff-port-842797 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-842797 │ jenkins │ v1.37.0 │ 03 Oct 25 19:39 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-327416 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-327416           │ jenkins │ v1.37.0 │ 03 Oct 25 19:39 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/03 19:39:25
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 19:39:25.932942  483467 out.go:360] Setting OutFile to fd 1 ...
	I1003 19:39:25.933180  483467 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 19:39:25.933209  483467 out.go:374] Setting ErrFile to fd 2...
	I1003 19:39:25.933231  483467 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 19:39:25.933503  483467 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-284583/.minikube/bin
	I1003 19:39:25.933971  483467 out.go:368] Setting JSON to false
	I1003 19:39:25.934956  483467 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8517,"bootTime":1759511849,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1003 19:39:25.935043  483467 start.go:140] virtualization:  
	I1003 19:39:25.938750  483467 out.go:179] * [default-k8s-diff-port-842797] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1003 19:39:25.942683  483467 out.go:179]   - MINIKUBE_LOCATION=21625
	I1003 19:39:25.942731  483467 notify.go:220] Checking for updates...
	I1003 19:39:25.945656  483467 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 19:39:25.948620  483467 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21625-284583/kubeconfig
	I1003 19:39:25.951565  483467 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-284583/.minikube
	I1003 19:39:25.954646  483467 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1003 19:39:25.957615  483467 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 19:39:25.961219  483467 config.go:182] Loaded profile config "embed-certs-327416": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 19:39:25.961335  483467 driver.go:421] Setting default libvirt URI to qemu:///system
	I1003 19:39:25.990552  483467 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1003 19:39:25.990701  483467 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 19:39:26.058520  483467 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-03 19:39:26.049123842 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1003 19:39:26.058630  483467 docker.go:318] overlay module found
	I1003 19:39:26.063651  483467 out.go:179] * Using the docker driver based on user configuration
	I1003 19:39:26.066513  483467 start.go:304] selected driver: docker
	I1003 19:39:26.066535  483467 start.go:924] validating driver "docker" against <nil>
	I1003 19:39:26.066551  483467 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 19:39:26.067334  483467 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 19:39:26.124567  483467 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-03 19:39:26.114843277 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1003 19:39:26.124812  483467 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1003 19:39:26.125119  483467 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 19:39:26.128158  483467 out.go:179] * Using Docker driver with root privileges
	I1003 19:39:26.131086  483467 cni.go:84] Creating CNI manager for ""
	I1003 19:39:26.131161  483467 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 19:39:26.131170  483467 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1003 19:39:26.131265  483467 start.go:348] cluster config:
	{Name:default-k8s-diff-port-842797 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-842797 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 19:39:26.136585  483467 out.go:179] * Starting "default-k8s-diff-port-842797" primary control-plane node in "default-k8s-diff-port-842797" cluster
	I1003 19:39:26.139536  483467 cache.go:123] Beginning downloading kic base image for docker with crio
	I1003 19:39:26.142582  483467 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1003 19:39:26.145504  483467 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 19:39:26.145549  483467 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1003 19:39:26.145565  483467 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21625-284583/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1003 19:39:26.145575  483467 cache.go:58] Caching tarball of preloaded images
	I1003 19:39:26.145672  483467 preload.go:233] Found /home/jenkins/minikube-integration/21625-284583/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1003 19:39:26.145682  483467 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1003 19:39:26.145782  483467 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/default-k8s-diff-port-842797/config.json ...
	I1003 19:39:26.145799  483467 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/default-k8s-diff-port-842797/config.json: {Name:mk9f4bcb6918d4aaaced0acedc3031674fa3d10f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:39:26.167200  483467 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1003 19:39:26.167223  483467 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1003 19:39:26.167241  483467 cache.go:232] Successfully downloaded all kic artifacts
	I1003 19:39:26.167264  483467 start.go:360] acquireMachinesLock for default-k8s-diff-port-842797: {Name:mk20e38240481d350e4d3a0db3a5de4e7cd2a493 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 19:39:26.167376  483467 start.go:364] duration metric: took 97.946µs to acquireMachinesLock for "default-k8s-diff-port-842797"
	I1003 19:39:26.167406  483467 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-842797 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-842797 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:
false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1003 19:39:26.167486  483467 start.go:125] createHost starting for "" (driver="docker")
	I1003 19:39:26.170965  483467 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1003 19:39:26.171194  483467 start.go:159] libmachine.API.Create for "default-k8s-diff-port-842797" (driver="docker")
	I1003 19:39:26.171245  483467 client.go:168] LocalClient.Create starting
	I1003 19:39:26.171327  483467 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem
	I1003 19:39:26.171366  483467 main.go:141] libmachine: Decoding PEM data...
	I1003 19:39:26.171384  483467 main.go:141] libmachine: Parsing certificate...
	I1003 19:39:26.171460  483467 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem
	I1003 19:39:26.171487  483467 main.go:141] libmachine: Decoding PEM data...
	I1003 19:39:26.171497  483467 main.go:141] libmachine: Parsing certificate...
	I1003 19:39:26.171887  483467 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-842797 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1003 19:39:26.188972  483467 cli_runner.go:211] docker network inspect default-k8s-diff-port-842797 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1003 19:39:26.189056  483467 network_create.go:284] running [docker network inspect default-k8s-diff-port-842797] to gather additional debugging logs...
	I1003 19:39:26.189079  483467 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-842797
	W1003 19:39:26.205816  483467 cli_runner.go:211] docker network inspect default-k8s-diff-port-842797 returned with exit code 1
	I1003 19:39:26.205860  483467 network_create.go:287] error running [docker network inspect default-k8s-diff-port-842797]: docker network inspect default-k8s-diff-port-842797: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-842797 not found
	I1003 19:39:26.205881  483467 network_create.go:289] output of [docker network inspect default-k8s-diff-port-842797]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-842797 not found
	
	** /stderr **
	I1003 19:39:26.206004  483467 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 19:39:26.222657  483467 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-3a8a28910ba8 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6e:7a:d0:f8:54:63} reservation:<nil>}
	I1003 19:39:26.223030  483467 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-157403cbb468 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:8a:ee:cb:12:bf:d0} reservation:<nil>}
	I1003 19:39:26.223283  483467 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8d1e24f7a986 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:9e:1b:b1:d8:1a:13} reservation:<nil>}
	I1003 19:39:26.223711  483467 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a2d1f0}
	I1003 19:39:26.223737  483467 network_create.go:124] attempt to create docker network default-k8s-diff-port-842797 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1003 19:39:26.223793  483467 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-842797 default-k8s-diff-port-842797
	I1003 19:39:26.292484  483467 network_create.go:108] docker network default-k8s-diff-port-842797 192.168.76.0/24 created
	I1003 19:39:26.292519  483467 kic.go:121] calculated static IP "192.168.76.2" for the "default-k8s-diff-port-842797" container
	I1003 19:39:26.292594  483467 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1003 19:39:26.309160  483467 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-842797 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-842797 --label created_by.minikube.sigs.k8s.io=true
	I1003 19:39:26.330884  483467 oci.go:103] Successfully created a docker volume default-k8s-diff-port-842797
	I1003 19:39:26.330979  483467 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-842797-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-842797 --entrypoint /usr/bin/test -v default-k8s-diff-port-842797:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1003 19:39:26.873212  483467 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-842797
	I1003 19:39:26.873268  483467 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 19:39:26.873288  483467 kic.go:194] Starting extracting preloaded images to volume ...
	I1003 19:39:26.873381  483467 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21625-284583/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-842797:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
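
Note: the subnet selection a few lines above (skipping 192.168.49.0/24, 192.168.58.0/24 and 192.168.67.0/24 before settling on 192.168.76.0/24) follows a simple "step through candidate /24s" pattern. The sketch below only illustrates that behaviour as it appears in this start log; it is not minikube's actual network-selection code, and the candidate range and step size are assumptions.

	package main

	import "fmt"

	// pickFreeSubnet walks candidate 192.168.x.0/24 networks in steps of 9 in the
	// third octet (49, 58, 67, 76, ...) and returns the first one not already held
	// by an existing bridge, mirroring the "skipping subnet ... that is taken" /
	// "using free private subnet" lines above. Step size and range are assumed.
	func pickFreeSubnet(taken map[string]bool) (string, error) {
		for octet := 49; octet <= 250; octet += 9 {
			subnet := fmt.Sprintf("192.168.%d.0/24", octet)
			if !taken[subnet] {
				return subnet, nil
			}
		}
		return "", fmt.Errorf("no free private /24 found")
	}

	func main() {
		// Subnets already held by br-3a8a28910ba8, br-157403cbb468 and br-8d1e24f7a986
		// in the log above.
		taken := map[string]bool{
			"192.168.49.0/24": true,
			"192.168.58.0/24": true,
			"192.168.67.0/24": true,
		}
		subnet, err := pickFreeSubnet(taken)
		if err != nil {
			panic(err)
		}
		fmt.Println(subnet) // prints 192.168.76.0/24
	}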
	
	
	==> CRI-O <==
	Oct 03 19:39:20 embed-certs-327416 crio[839]: time="2025-10-03T19:39:20.164973809Z" level=info msg="Created container 86894c4aab971a193d85d21e5d11177f8553ec73ff820dfe9060e2ffcbb00918: kube-system/coredns-66bc5c9577-bjdpd/coredns" id=54fdb315-75d7-4b89-a039-2022068dd9d0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:39:20 embed-certs-327416 crio[839]: time="2025-10-03T19:39:20.166065501Z" level=info msg="Starting container: 86894c4aab971a193d85d21e5d11177f8553ec73ff820dfe9060e2ffcbb00918" id=c3577332-a4c8-4da9-96ec-058d3f89da5d name=/runtime.v1.RuntimeService/StartContainer
	Oct 03 19:39:20 embed-certs-327416 crio[839]: time="2025-10-03T19:39:20.17391262Z" level=info msg="Started container" PID=1740 containerID=86894c4aab971a193d85d21e5d11177f8553ec73ff820dfe9060e2ffcbb00918 description=kube-system/coredns-66bc5c9577-bjdpd/coredns id=c3577332-a4c8-4da9-96ec-058d3f89da5d name=/runtime.v1.RuntimeService/StartContainer sandboxID=d43d7c45d25c882e993813e2df5aeb80d5dded3a8be46cf73e81c463eb130a02
	Oct 03 19:39:23 embed-certs-327416 crio[839]: time="2025-10-03T19:39:23.453693466Z" level=info msg="Running pod sandbox: default/busybox/POD" id=63dc7d01-3633-4d5d-aba0-6a384945c522 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 03 19:39:23 embed-certs-327416 crio[839]: time="2025-10-03T19:39:23.453767674Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 19:39:23 embed-certs-327416 crio[839]: time="2025-10-03T19:39:23.459007606Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:609bee421b816c887bd022b529b708b82ccf5c28bde9f2d12bf0431d63658723 UID:ac0dae91-bdf3-4c0b-b787-6ff828edd312 NetNS:/var/run/netns/2de225f9-025a-4da0-bc3e-2f71ae00915b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40000791c8}] Aliases:map[]}"
	Oct 03 19:39:23 embed-certs-327416 crio[839]: time="2025-10-03T19:39:23.459177258Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 03 19:39:23 embed-certs-327416 crio[839]: time="2025-10-03T19:39:23.470719259Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:609bee421b816c887bd022b529b708b82ccf5c28bde9f2d12bf0431d63658723 UID:ac0dae91-bdf3-4c0b-b787-6ff828edd312 NetNS:/var/run/netns/2de225f9-025a-4da0-bc3e-2f71ae00915b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40000791c8}] Aliases:map[]}"
	Oct 03 19:39:23 embed-certs-327416 crio[839]: time="2025-10-03T19:39:23.471070592Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 03 19:39:23 embed-certs-327416 crio[839]: time="2025-10-03T19:39:23.475034082Z" level=info msg="Ran pod sandbox 609bee421b816c887bd022b529b708b82ccf5c28bde9f2d12bf0431d63658723 with infra container: default/busybox/POD" id=63dc7d01-3633-4d5d-aba0-6a384945c522 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 03 19:39:23 embed-certs-327416 crio[839]: time="2025-10-03T19:39:23.487246748Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=2af3ddac-5155-4e71-9373-7f4964bae917 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 19:39:23 embed-certs-327416 crio[839]: time="2025-10-03T19:39:23.487384876Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=2af3ddac-5155-4e71-9373-7f4964bae917 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 19:39:23 embed-certs-327416 crio[839]: time="2025-10-03T19:39:23.487423908Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=2af3ddac-5155-4e71-9373-7f4964bae917 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 19:39:23 embed-certs-327416 crio[839]: time="2025-10-03T19:39:23.490815722Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=50b39017-c290-4d57-bfa0-026057132809 name=/runtime.v1.ImageService/PullImage
	Oct 03 19:39:23 embed-certs-327416 crio[839]: time="2025-10-03T19:39:23.493276674Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 03 19:39:25 embed-certs-327416 crio[839]: time="2025-10-03T19:39:25.736359056Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=50b39017-c290-4d57-bfa0-026057132809 name=/runtime.v1.ImageService/PullImage
	Oct 03 19:39:25 embed-certs-327416 crio[839]: time="2025-10-03T19:39:25.737460505Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b8805074-8c8a-445f-9db8-68bfc111bfb8 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 19:39:25 embed-certs-327416 crio[839]: time="2025-10-03T19:39:25.741926435Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e6f0a485-a29e-4762-be3f-318259e1b05c name=/runtime.v1.ImageService/ImageStatus
	Oct 03 19:39:25 embed-certs-327416 crio[839]: time="2025-10-03T19:39:25.749333096Z" level=info msg="Creating container: default/busybox/busybox" id=233cc215-e14a-4e6f-b596-206bf70c38af name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:39:25 embed-certs-327416 crio[839]: time="2025-10-03T19:39:25.750325612Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 19:39:25 embed-certs-327416 crio[839]: time="2025-10-03T19:39:25.75526611Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 19:39:25 embed-certs-327416 crio[839]: time="2025-10-03T19:39:25.756035048Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 19:39:25 embed-certs-327416 crio[839]: time="2025-10-03T19:39:25.780450666Z" level=info msg="Created container 2b2a991120e4d38dce8de3082fb9efac0d0ece85972e7537585a3170d0929932: default/busybox/busybox" id=233cc215-e14a-4e6f-b596-206bf70c38af name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:39:25 embed-certs-327416 crio[839]: time="2025-10-03T19:39:25.783946563Z" level=info msg="Starting container: 2b2a991120e4d38dce8de3082fb9efac0d0ece85972e7537585a3170d0929932" id=b09d86e2-6ad9-4b79-b0d5-07d58b3b41b6 name=/runtime.v1.RuntimeService/StartContainer
	Oct 03 19:39:25 embed-certs-327416 crio[839]: time="2025-10-03T19:39:25.789244154Z" level=info msg="Started container" PID=1790 containerID=2b2a991120e4d38dce8de3082fb9efac0d0ece85972e7537585a3170d0929932 description=default/busybox/busybox id=b09d86e2-6ad9-4b79-b0d5-07d58b3b41b6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=609bee421b816c887bd022b529b708b82ccf5c28bde9f2d12bf0431d63658723
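
Note: the CRI-O entries above trace the standard CRI image flow for the busybox pod: ImageStatus reports the image missing, PullImage fetches gcr.io/k8s-minikube/busybox:1.28.4-glibc, a second ImageStatus succeeds, and CreateContainer/StartContainer run it. A minimal client-side sketch of that first half follows; it assumes the default CRI-O socket path and a vendored k8s.io/cri-api, and is illustrative only, not the kubelet's code path.

	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimev1 "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
		defer cancel()

		// Assumed socket path; CRI-O normally listens on /var/run/crio/crio.sock.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		img := &runtimev1.ImageSpec{Image: "gcr.io/k8s-minikube/busybox:1.28.4-glibc"}
		client := runtimev1.NewImageServiceClient(conn)

		// ImageStatus: corresponds to the "Checking image status" log lines.
		status, err := client.ImageStatus(ctx, &runtimev1.ImageStatusRequest{Image: img})
		if err != nil {
			panic(err)
		}
		if status.Image == nil {
			// Image not present: corresponds to "not found" followed by "Pulling image".
			if _, err := client.PullImage(ctx, &runtimev1.PullImageRequest{Image: img}); err != nil {
				panic(err)
			}
		}
		fmt.Println("image available")
	}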
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	2b2a991120e4d       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   9 seconds ago        Running             busybox                   0                   609bee421b816       busybox                                      default
	86894c4aab971       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      14 seconds ago       Running             coredns                   0                   d43d7c45d25c8       coredns-66bc5c9577-bjdpd                     kube-system
	a357411ce077d       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      14 seconds ago       Running             storage-provisioner       0                   844fdb0412d5f       storage-provisioner                          kube-system
	21bf03b4b9f02       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      56 seconds ago       Running             kindnet-cni               0                   b62085cd1b194       kindnet-2jswv                                kube-system
	bae0def8268a0       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      56 seconds ago       Running             kube-proxy                0                   01eecc5b45275       kube-proxy-ncw55                             kube-system
	449205c509b43       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   78ef006e85442       etcd-embed-certs-327416                      kube-system
	d6dedb05c69ea       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   0e47679beeb88       kube-scheduler-embed-certs-327416            kube-system
	f962261693f3d       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   1c301de29b7ab       kube-apiserver-embed-certs-327416            kube-system
	55e8dc56b8f81       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   7929472ce5a27       kube-controller-manager-embed-certs-327416   kube-system
	
	
	==> coredns [86894c4aab971a193d85d21e5d11177f8553ec73ff820dfe9060e2ffcbb00918] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:33898 - 13851 "HINFO IN 550456263642926574.4206419025207649495. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.013191225s
	
	
	==> describe nodes <==
	Name:               embed-certs-327416
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-327416
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a43873c79fc22f8b1ccd29d3dfa635d392b09335
	                    minikube.k8s.io/name=embed-certs-327416
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_03T19_38_33_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 03 Oct 2025 19:38:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-327416
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 03 Oct 2025 19:39:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 03 Oct 2025 19:39:33 +0000   Fri, 03 Oct 2025 19:38:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 03 Oct 2025 19:39:33 +0000   Fri, 03 Oct 2025 19:38:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 03 Oct 2025 19:39:33 +0000   Fri, 03 Oct 2025 19:38:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 03 Oct 2025 19:39:33 +0000   Fri, 03 Oct 2025 19:39:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-327416
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 29e7ff88d6ea40a49f34a74c5187dfe5
	  System UUID:                fb79a29c-023c-4bd8-a646-01fac5e931e0
	  Boot ID:                    3762136e-8bec-4104-a5cb-0b1976f6048e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-bjdpd                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     58s
	  kube-system                 etcd-embed-certs-327416                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         63s
	  kube-system                 kindnet-2jswv                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      58s
	  kube-system                 kube-apiserver-embed-certs-327416             250m (12%)    0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 kube-controller-manager-embed-certs-327416    200m (10%)    0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 kube-proxy-ncw55                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  kube-system                 kube-scheduler-embed-certs-327416             100m (5%)     0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 55s                kube-proxy       
	  Normal   NodeHasSufficientMemory  72s (x8 over 72s)  kubelet          Node embed-certs-327416 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    72s (x8 over 72s)  kubelet          Node embed-certs-327416 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     72s (x8 over 72s)  kubelet          Node embed-certs-327416 status is now: NodeHasSufficientPID
	  Normal   Starting                 63s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 63s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  63s                kubelet          Node embed-certs-327416 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    63s                kubelet          Node embed-certs-327416 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     63s                kubelet          Node embed-certs-327416 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           59s                node-controller  Node embed-certs-327416 event: Registered Node embed-certs-327416 in Controller
	  Normal   NodeReady                16s                kubelet          Node embed-certs-327416 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct 3 19:09] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:10] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:11] overlayfs: idmapped layers are currently not supported
	[  +4.287643] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:12] overlayfs: idmapped layers are currently not supported
	[ +24.839009] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:13] overlayfs: idmapped layers are currently not supported
	[ +26.493253] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:15] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:16] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:17] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000010] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Oct 3 19:18] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:20] overlayfs: idmapped layers are currently not supported
	[ +32.018892] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:22] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:24] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:26] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:32] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:34] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:35] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:36] overlayfs: idmapped layers are currently not supported
	[  +4.740983] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:38] overlayfs: idmapped layers are currently not supported
	[ +12.897300] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [449205c509b43a68e48afb777dadb07b16a58c431b7df6f78351835da2f20c13] <==
	{"level":"warn","ts":"2025-10-03T19:38:25.859542Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:38:25.875226Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:38:25.893682Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:38:25.918022Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:38:25.930282Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:38:25.949980Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:38:26.006563Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:38:26.061032Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:38:26.061046Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:38:26.107371Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:38:26.149811Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:38:26.175398Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:38:26.213051Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:38:26.236356Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:38:26.276955Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:38:26.318456Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:38:26.355432Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:38:26.375654Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:38:26.426162Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:38:26.461093Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:38:26.492037Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:38:26.521686Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:38:26.647418Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42816","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-03T19:38:28.082465Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2025-10-03T19:38:28.082751Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	
	
	==> kernel <==
	 19:39:35 up  2:22,  0 user,  load average: 2.99, 2.77, 2.19
	Linux embed-certs-327416 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [21bf03b4b9f02167a8bcd260bda3c7e3c1a71bf0154433ac27cda5e5c2f1888e] <==
	I1003 19:38:38.799809       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1003 19:38:38.808825       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1003 19:38:38.808982       1 main.go:148] setting mtu 1500 for CNI 
	I1003 19:38:38.808994       1 main.go:178] kindnetd IP family: "ipv4"
	I1003 19:38:38.809008       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-03T19:38:39Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1003 19:38:39.040644       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1003 19:38:39.041472       1 controller.go:381] "Waiting for informer caches to sync"
	I1003 19:38:39.041841       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1003 19:38:39.042315       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1003 19:39:09.040958       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1003 19:39:09.042286       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1003 19:39:09.045801       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1003 19:39:09.048150       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1003 19:39:10.342414       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1003 19:39:10.342467       1 metrics.go:72] Registering metrics
	I1003 19:39:10.342683       1 controller.go:711] "Syncing nftables rules"
	I1003 19:39:19.042483       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1003 19:39:19.042531       1 main.go:301] handling current node
	I1003 19:39:29.036123       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1003 19:39:29.036157       1 main.go:301] handling current node
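
Note: the kindnet log above (and the kube-proxy and kube-scheduler sections below) shows the usual client-go informer lifecycle: "Waiting for caches to sync", transient list failures while the apiserver is briefly unreachable, then "Caches are synced" once the reflector retries succeed. A minimal sketch of that pattern, assuming in-cluster config and a pod informer (not kindnet's actual controller):

	package main

	import (
		"fmt"
		"time"

		"k8s.io/client-go/informers"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/cache"
	)

	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		factory := informers.NewSharedInformerFactory(clientset, 10*time.Minute)
		podInformer := factory.Core().V1().Pods().Informer()

		stop := make(chan struct{})
		defer close(stop)
		factory.Start(stop)

		// Equivalent to the "Waiting for caches to sync" / "Caches are synced"
		// messages in the log: block until the initial list+watch has populated
		// the local cache (list errors are retried internally, as seen above).
		if !cache.WaitForCacheSync(stop, podInformer.HasSynced) {
			panic("caches did not sync")
		}
		fmt.Println("caches are synced")
	}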
	
	
	==> kube-apiserver [f962261693f3d7c522c19bae69c51655a3e65acbea55e9e723c8d8d8208bb036] <==
	I1003 19:38:28.529070       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1003 19:38:28.556608       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1003 19:38:28.564527       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1003 19:38:28.564681       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1003 19:38:28.607765       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1003 19:38:28.609400       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1003 19:38:28.827451       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1003 19:38:28.955182       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1003 19:38:28.992162       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1003 19:38:29.000475       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1003 19:38:30.810355       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1003 19:38:30.903552       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1003 19:38:31.086123       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1003 19:38:31.111714       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1003 19:38:31.113412       1 controller.go:667] quota admission added evaluator for: endpoints
	I1003 19:38:31.131270       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1003 19:38:31.543135       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1003 19:38:32.261108       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1003 19:38:32.294801       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1003 19:38:32.334997       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1003 19:38:37.173163       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1003 19:38:37.189807       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1003 19:38:37.636296       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1003 19:38:37.664312       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1003 19:39:33.345208       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:59140: use of closed network connection
	
	
	==> kube-controller-manager [55e8dc56b8f81c9b23ba99427306fdcc4119f30d2cb054479ce3cdad3aa295db] <==
	I1003 19:38:36.622162       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1003 19:38:36.622196       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1003 19:38:36.622294       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1003 19:38:36.622321       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1003 19:38:36.622354       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1003 19:38:36.622389       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1003 19:38:36.622417       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1003 19:38:36.623760       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1003 19:38:36.628230       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1003 19:38:36.631780       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1003 19:38:36.631918       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1003 19:38:36.638280       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1003 19:38:36.643209       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1003 19:38:36.649542       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1003 19:38:36.651954       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1003 19:38:36.652262       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1003 19:38:36.669374       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1003 19:38:36.669615       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1003 19:38:36.669632       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1003 19:38:36.669639       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1003 19:38:36.669692       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1003 19:38:36.670264       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1003 19:38:36.671400       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1003 19:38:36.671559       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1003 19:39:21.627476       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [bae0def8268a0f418cff53e1ed4a5e1ea7daf2c2ed16a5413106193c0acbd083] <==
	I1003 19:38:38.890673       1 server_linux.go:53] "Using iptables proxy"
	I1003 19:38:39.214424       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1003 19:38:39.315295       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1003 19:38:39.315382       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1003 19:38:39.315489       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1003 19:38:39.378730       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1003 19:38:39.378799       1 server_linux.go:132] "Using iptables Proxier"
	I1003 19:38:39.386594       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1003 19:38:39.386965       1 server.go:527] "Version info" version="v1.34.1"
	I1003 19:38:39.387018       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1003 19:38:39.388780       1 config.go:200] "Starting service config controller"
	I1003 19:38:39.388854       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1003 19:38:39.389099       1 config.go:106] "Starting endpoint slice config controller"
	I1003 19:38:39.389119       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1003 19:38:39.389139       1 config.go:403] "Starting serviceCIDR config controller"
	I1003 19:38:39.389144       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1003 19:38:39.389639       1 config.go:309] "Starting node config controller"
	I1003 19:38:39.389655       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1003 19:38:39.489008       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1003 19:38:39.489148       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1003 19:38:39.489178       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1003 19:38:39.489685       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [d6dedb05c69ea4e187cab72dcdb927490dde89bf8d4343e5100e0be04280ef08] <==
	I1003 19:38:26.348517       1 serving.go:386] Generated self-signed cert in-memory
	I1003 19:38:31.378266       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1003 19:38:31.378570       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1003 19:38:31.384273       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1003 19:38:31.384696       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1003 19:38:31.384791       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1003 19:38:31.384849       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1003 19:38:31.421310       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1003 19:38:31.457681       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1003 19:38:31.424199       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1003 19:38:31.457816       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1003 19:38:31.491667       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1003 19:38:31.558371       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1003 19:38:31.558487       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 03 19:38:33 embed-certs-327416 kubelet[1321]: I1003 19:38:33.808866    1321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-327416" podStartSLOduration=1.80884811 podStartE2EDuration="1.80884811s" podCreationTimestamp="2025-10-03 19:38:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-03 19:38:33.788865351 +0000 UTC m=+1.636221026" watchObservedRunningTime="2025-10-03 19:38:33.80884811 +0000 UTC m=+1.656203777"
	Oct 03 19:38:36 embed-certs-327416 kubelet[1321]: I1003 19:38:36.681951    1321 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 03 19:38:36 embed-certs-327416 kubelet[1321]: I1003 19:38:36.682743    1321 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 03 19:38:37 embed-certs-327416 kubelet[1321]: I1003 19:38:37.869925    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/54ac7a9a-424b-4c7e-94a8-5a15bc1d91c2-lib-modules\") pod \"kube-proxy-ncw55\" (UID: \"54ac7a9a-424b-4c7e-94a8-5a15bc1d91c2\") " pod="kube-system/kube-proxy-ncw55"
	Oct 03 19:38:37 embed-certs-327416 kubelet[1321]: I1003 19:38:37.869982    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b05191d5-b4b3-42d6-8488-25e3b30ad1a1-lib-modules\") pod \"kindnet-2jswv\" (UID: \"b05191d5-b4b3-42d6-8488-25e3b30ad1a1\") " pod="kube-system/kindnet-2jswv"
	Oct 03 19:38:37 embed-certs-327416 kubelet[1321]: I1003 19:38:37.870004    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mt894\" (UniqueName: \"kubernetes.io/projected/b05191d5-b4b3-42d6-8488-25e3b30ad1a1-kube-api-access-mt894\") pod \"kindnet-2jswv\" (UID: \"b05191d5-b4b3-42d6-8488-25e3b30ad1a1\") " pod="kube-system/kindnet-2jswv"
	Oct 03 19:38:37 embed-certs-327416 kubelet[1321]: I1003 19:38:37.870027    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/b05191d5-b4b3-42d6-8488-25e3b30ad1a1-cni-cfg\") pod \"kindnet-2jswv\" (UID: \"b05191d5-b4b3-42d6-8488-25e3b30ad1a1\") " pod="kube-system/kindnet-2jswv"
	Oct 03 19:38:37 embed-certs-327416 kubelet[1321]: I1003 19:38:37.870046    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/54ac7a9a-424b-4c7e-94a8-5a15bc1d91c2-kube-proxy\") pod \"kube-proxy-ncw55\" (UID: \"54ac7a9a-424b-4c7e-94a8-5a15bc1d91c2\") " pod="kube-system/kube-proxy-ncw55"
	Oct 03 19:38:37 embed-certs-327416 kubelet[1321]: I1003 19:38:37.870067    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/54ac7a9a-424b-4c7e-94a8-5a15bc1d91c2-xtables-lock\") pod \"kube-proxy-ncw55\" (UID: \"54ac7a9a-424b-4c7e-94a8-5a15bc1d91c2\") " pod="kube-system/kube-proxy-ncw55"
	Oct 03 19:38:37 embed-certs-327416 kubelet[1321]: I1003 19:38:37.870084    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mr4nt\" (UniqueName: \"kubernetes.io/projected/54ac7a9a-424b-4c7e-94a8-5a15bc1d91c2-kube-api-access-mr4nt\") pod \"kube-proxy-ncw55\" (UID: \"54ac7a9a-424b-4c7e-94a8-5a15bc1d91c2\") " pod="kube-system/kube-proxy-ncw55"
	Oct 03 19:38:37 embed-certs-327416 kubelet[1321]: I1003 19:38:37.870110    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b05191d5-b4b3-42d6-8488-25e3b30ad1a1-xtables-lock\") pod \"kindnet-2jswv\" (UID: \"b05191d5-b4b3-42d6-8488-25e3b30ad1a1\") " pod="kube-system/kindnet-2jswv"
	Oct 03 19:38:38 embed-certs-327416 kubelet[1321]: I1003 19:38:38.111164    1321 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 03 19:38:38 embed-certs-327416 kubelet[1321]: W1003 19:38:38.418249    1321 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/7044b9fbdfefb3fd8bce7381adae2abdcd93d79fb8452cc72e2f26e58ccd8222/crio-01eecc5b4527561171253cc278f6d93a45d7438c257c504e305057c05086f4e0 WatchSource:0}: Error finding container 01eecc5b4527561171253cc278f6d93a45d7438c257c504e305057c05086f4e0: Status 404 returned error can't find the container with id 01eecc5b4527561171253cc278f6d93a45d7438c257c504e305057c05086f4e0
	Oct 03 19:38:38 embed-certs-327416 kubelet[1321]: W1003 19:38:38.470181    1321 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/7044b9fbdfefb3fd8bce7381adae2abdcd93d79fb8452cc72e2f26e58ccd8222/crio-b62085cd1b1945507985163b7488d4a78ea942050f636ebefdacba1d1bc40e9e WatchSource:0}: Error finding container b62085cd1b1945507985163b7488d4a78ea942050f636ebefdacba1d1bc40e9e: Status 404 returned error can't find the container with id b62085cd1b1945507985163b7488d4a78ea942050f636ebefdacba1d1bc40e9e
	Oct 03 19:38:39 embed-certs-327416 kubelet[1321]: I1003 19:38:39.667032    1321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-2jswv" podStartSLOduration=2.66701154 podStartE2EDuration="2.66701154s" podCreationTimestamp="2025-10-03 19:38:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-03 19:38:39.612122095 +0000 UTC m=+7.459477770" watchObservedRunningTime="2025-10-03 19:38:39.66701154 +0000 UTC m=+7.514367223"
	Oct 03 19:38:42 embed-certs-327416 kubelet[1321]: I1003 19:38:42.525979    1321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ncw55" podStartSLOduration=5.525958996 podStartE2EDuration="5.525958996s" podCreationTimestamp="2025-10-03 19:38:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-03 19:38:39.711746304 +0000 UTC m=+7.559102078" watchObservedRunningTime="2025-10-03 19:38:42.525958996 +0000 UTC m=+10.373314663"
	Oct 03 19:39:19 embed-certs-327416 kubelet[1321]: I1003 19:39:19.583978    1321 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 03 19:39:19 embed-certs-327416 kubelet[1321]: I1003 19:39:19.681928    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/b02f2aae-4045-452f-aaac-e4bf1daea610-tmp\") pod \"storage-provisioner\" (UID: \"b02f2aae-4045-452f-aaac-e4bf1daea610\") " pod="kube-system/storage-provisioner"
	Oct 03 19:39:19 embed-certs-327416 kubelet[1321]: I1003 19:39:19.681997    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/17c509e4-9d58-4e2e-9a05-3e6eb361dc8a-config-volume\") pod \"coredns-66bc5c9577-bjdpd\" (UID: \"17c509e4-9d58-4e2e-9a05-3e6eb361dc8a\") " pod="kube-system/coredns-66bc5c9577-bjdpd"
	Oct 03 19:39:19 embed-certs-327416 kubelet[1321]: I1003 19:39:19.682019    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbnc2\" (UniqueName: \"kubernetes.io/projected/17c509e4-9d58-4e2e-9a05-3e6eb361dc8a-kube-api-access-kbnc2\") pod \"coredns-66bc5c9577-bjdpd\" (UID: \"17c509e4-9d58-4e2e-9a05-3e6eb361dc8a\") " pod="kube-system/coredns-66bc5c9577-bjdpd"
	Oct 03 19:39:19 embed-certs-327416 kubelet[1321]: I1003 19:39:19.682042    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4sdd\" (UniqueName: \"kubernetes.io/projected/b02f2aae-4045-452f-aaac-e4bf1daea610-kube-api-access-v4sdd\") pod \"storage-provisioner\" (UID: \"b02f2aae-4045-452f-aaac-e4bf1daea610\") " pod="kube-system/storage-provisioner"
	Oct 03 19:39:19 embed-certs-327416 kubelet[1321]: W1003 19:39:19.970494    1321 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/7044b9fbdfefb3fd8bce7381adae2abdcd93d79fb8452cc72e2f26e58ccd8222/crio-844fdb0412d5f8db6a7e9da01a67d603f97f953c593d00d4f39571f837e809ec WatchSource:0}: Error finding container 844fdb0412d5f8db6a7e9da01a67d603f97f953c593d00d4f39571f837e809ec: Status 404 returned error can't find the container with id 844fdb0412d5f8db6a7e9da01a67d603f97f953c593d00d4f39571f837e809ec
	Oct 03 19:39:20 embed-certs-327416 kubelet[1321]: I1003 19:39:20.713369    1321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=42.713342672 podStartE2EDuration="42.713342672s" podCreationTimestamp="2025-10-03 19:38:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-03 19:39:20.690773122 +0000 UTC m=+48.538128805" watchObservedRunningTime="2025-10-03 19:39:20.713342672 +0000 UTC m=+48.560698355"
	Oct 03 19:39:23 embed-certs-327416 kubelet[1321]: I1003 19:39:23.143424    1321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-bjdpd" podStartSLOduration=46.143404223 podStartE2EDuration="46.143404223s" podCreationTimestamp="2025-10-03 19:38:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-03 19:39:20.716949529 +0000 UTC m=+48.564305229" watchObservedRunningTime="2025-10-03 19:39:23.143404223 +0000 UTC m=+50.990759889"
	Oct 03 19:39:23 embed-certs-327416 kubelet[1321]: I1003 19:39:23.211229    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8drw\" (UniqueName: \"kubernetes.io/projected/ac0dae91-bdf3-4c0b-b787-6ff828edd312-kube-api-access-n8drw\") pod \"busybox\" (UID: \"ac0dae91-bdf3-4c0b-b787-6ff828edd312\") " pod="default/busybox"
	
	
	==> storage-provisioner [a357411ce077d77d02a0d3cd3249c7bacdb05d4b0568260d2066442c3e8120ad] <==
	I1003 19:39:20.107407       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1003 19:39:20.121471       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1003 19:39:20.121518       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1003 19:39:20.127948       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:39:20.154149       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1003 19:39:20.159785       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1003 19:39:20.160077       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-327416_4ab96349-6c93-48a9-a46c-9352df17b5af!
	I1003 19:39:20.160244       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"879423f6-b7ff-450e-9e7a-f9f8ef1edeae", APIVersion:"v1", ResourceVersion:"459", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-327416_4ab96349-6c93-48a9-a46c-9352df17b5af became leader
	W1003 19:39:20.201698       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:39:20.216417       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1003 19:39:20.260891       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-327416_4ab96349-6c93-48a9-a46c-9352df17b5af!
	W1003 19:39:22.220267       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:39:22.229131       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:39:24.232323       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:39:24.237410       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:39:26.241848       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:39:26.250937       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:39:28.255760       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:39:28.261373       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:39:30.264604       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:39:30.367798       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:39:32.372959       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:39:32.378836       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:39:34.382838       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:39:34.388541       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-327416 -n embed-certs-327416
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-327416 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.71s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (6.4s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-327416 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p embed-certs-327416 --alsologtostderr -v=1: exit status 80 (1.899302793s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-327416 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 19:40:52.801113  488958 out.go:360] Setting OutFile to fd 1 ...
	I1003 19:40:52.801290  488958 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 19:40:52.801302  488958 out.go:374] Setting ErrFile to fd 2...
	I1003 19:40:52.801308  488958 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 19:40:52.801582  488958 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-284583/.minikube/bin
	I1003 19:40:52.801872  488958 out.go:368] Setting JSON to false
	I1003 19:40:52.801912  488958 mustload.go:65] Loading cluster: embed-certs-327416
	I1003 19:40:52.802312  488958 config.go:182] Loaded profile config "embed-certs-327416": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 19:40:52.802792  488958 cli_runner.go:164] Run: docker container inspect embed-certs-327416 --format={{.State.Status}}
	I1003 19:40:52.820387  488958 host.go:66] Checking if "embed-certs-327416" exists ...
	I1003 19:40:52.820683  488958 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 19:40:52.883855  488958 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-03 19:40:52.869170868 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1003 19:40:52.884481  488958 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-327416 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1003 19:40:52.888334  488958 out.go:179] * Pausing node embed-certs-327416 ... 
	I1003 19:40:52.889857  488958 host.go:66] Checking if "embed-certs-327416" exists ...
	I1003 19:40:52.890182  488958 ssh_runner.go:195] Run: systemctl --version
	I1003 19:40:52.890229  488958 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-327416
	I1003 19:40:52.908210  488958 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/embed-certs-327416/id_rsa Username:docker}
	I1003 19:40:53.003647  488958 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 19:40:53.036489  488958 pause.go:51] kubelet running: true
	I1003 19:40:53.036573  488958 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1003 19:40:53.265029  488958 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1003 19:40:53.265162  488958 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1003 19:40:53.345464  488958 cri.go:89] found id: "5e66dc6a1481b362f77729de7b87a40c80a9b3559f540b5a8bd6f55ec6c8f731"
	I1003 19:40:53.345489  488958 cri.go:89] found id: "f08f692651a4c24dbc7f5c2d01b62f4b3444fe292b2f5c83c3522aac293a2680"
	I1003 19:40:53.345495  488958 cri.go:89] found id: "e082ac152bed0226fa5fbaf16b5adae1367f37de196398b9aa393d4b2682c3bb"
	I1003 19:40:53.345500  488958 cri.go:89] found id: "feab4d04b3ff4dcec9c7a34ced7bd215e07b33afff0b593771ec98a30d1421e9"
	I1003 19:40:53.345503  488958 cri.go:89] found id: "a099b0263e1ca1acdf33e1af73c68951785e54c0ba213fdfbcb1bb8d81e98644"
	I1003 19:40:53.345508  488958 cri.go:89] found id: "7251d8be4bbe1feadb8d7586aad5c359dbd66fd31d01b439cbe4b247e9edacb9"
	I1003 19:40:53.345511  488958 cri.go:89] found id: "d175d98dcd2f4aad68e57c312506a537fcec4add7ab32b2ffa4c3126efd41601"
	I1003 19:40:53.345514  488958 cri.go:89] found id: "58e88d8c2849a5437eb7767eb255d61ad53372f61e98f7b15fba814d13e38b12"
	I1003 19:40:53.345517  488958 cri.go:89] found id: "0c6c5a56f754c48cee635b6a3f179cd14335b49d4105c542ea8de2a52f7a1289"
	I1003 19:40:53.345524  488958 cri.go:89] found id: "a738125ff91fa9557f957b47e040af0afc4e0c20eba8d133f0a7232ec66b0d66"
	I1003 19:40:53.345527  488958 cri.go:89] found id: "a789d122b33c055f37ef455982128473a2a103a67ed53fffdb7d04275c3e1c56"
	I1003 19:40:53.345531  488958 cri.go:89] found id: ""
	I1003 19:40:53.345593  488958 ssh_runner.go:195] Run: sudo runc list -f json
	I1003 19:40:53.360832  488958 retry.go:31] will retry after 273.9894ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-03T19:40:53Z" level=error msg="open /run/runc: no such file or directory"
	I1003 19:40:53.635348  488958 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 19:40:53.648904  488958 pause.go:51] kubelet running: false
	I1003 19:40:53.648970  488958 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1003 19:40:53.826980  488958 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1003 19:40:53.827102  488958 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1003 19:40:53.904819  488958 cri.go:89] found id: "5e66dc6a1481b362f77729de7b87a40c80a9b3559f540b5a8bd6f55ec6c8f731"
	I1003 19:40:53.904845  488958 cri.go:89] found id: "f08f692651a4c24dbc7f5c2d01b62f4b3444fe292b2f5c83c3522aac293a2680"
	I1003 19:40:53.904851  488958 cri.go:89] found id: "e082ac152bed0226fa5fbaf16b5adae1367f37de196398b9aa393d4b2682c3bb"
	I1003 19:40:53.904855  488958 cri.go:89] found id: "feab4d04b3ff4dcec9c7a34ced7bd215e07b33afff0b593771ec98a30d1421e9"
	I1003 19:40:53.904858  488958 cri.go:89] found id: "a099b0263e1ca1acdf33e1af73c68951785e54c0ba213fdfbcb1bb8d81e98644"
	I1003 19:40:53.904862  488958 cri.go:89] found id: "7251d8be4bbe1feadb8d7586aad5c359dbd66fd31d01b439cbe4b247e9edacb9"
	I1003 19:40:53.904866  488958 cri.go:89] found id: "d175d98dcd2f4aad68e57c312506a537fcec4add7ab32b2ffa4c3126efd41601"
	I1003 19:40:53.904896  488958 cri.go:89] found id: "58e88d8c2849a5437eb7767eb255d61ad53372f61e98f7b15fba814d13e38b12"
	I1003 19:40:53.904908  488958 cri.go:89] found id: "0c6c5a56f754c48cee635b6a3f179cd14335b49d4105c542ea8de2a52f7a1289"
	I1003 19:40:53.904916  488958 cri.go:89] found id: "a738125ff91fa9557f957b47e040af0afc4e0c20eba8d133f0a7232ec66b0d66"
	I1003 19:40:53.904920  488958 cri.go:89] found id: "a789d122b33c055f37ef455982128473a2a103a67ed53fffdb7d04275c3e1c56"
	I1003 19:40:53.904924  488958 cri.go:89] found id: ""
	I1003 19:40:53.904996  488958 ssh_runner.go:195] Run: sudo runc list -f json
	I1003 19:40:53.916180  488958 retry.go:31] will retry after 315.136856ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-03T19:40:53Z" level=error msg="open /run/runc: no such file or directory"
	I1003 19:40:54.231549  488958 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 19:40:54.244894  488958 pause.go:51] kubelet running: false
	I1003 19:40:54.244974  488958 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1003 19:40:54.486770  488958 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1003 19:40:54.486846  488958 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1003 19:40:54.587715  488958 cri.go:89] found id: "5e66dc6a1481b362f77729de7b87a40c80a9b3559f540b5a8bd6f55ec6c8f731"
	I1003 19:40:54.587736  488958 cri.go:89] found id: "f08f692651a4c24dbc7f5c2d01b62f4b3444fe292b2f5c83c3522aac293a2680"
	I1003 19:40:54.587741  488958 cri.go:89] found id: "e082ac152bed0226fa5fbaf16b5adae1367f37de196398b9aa393d4b2682c3bb"
	I1003 19:40:54.587744  488958 cri.go:89] found id: "feab4d04b3ff4dcec9c7a34ced7bd215e07b33afff0b593771ec98a30d1421e9"
	I1003 19:40:54.587748  488958 cri.go:89] found id: "a099b0263e1ca1acdf33e1af73c68951785e54c0ba213fdfbcb1bb8d81e98644"
	I1003 19:40:54.587751  488958 cri.go:89] found id: "7251d8be4bbe1feadb8d7586aad5c359dbd66fd31d01b439cbe4b247e9edacb9"
	I1003 19:40:54.587754  488958 cri.go:89] found id: "d175d98dcd2f4aad68e57c312506a537fcec4add7ab32b2ffa4c3126efd41601"
	I1003 19:40:54.587757  488958 cri.go:89] found id: "58e88d8c2849a5437eb7767eb255d61ad53372f61e98f7b15fba814d13e38b12"
	I1003 19:40:54.587760  488958 cri.go:89] found id: "0c6c5a56f754c48cee635b6a3f179cd14335b49d4105c542ea8de2a52f7a1289"
	I1003 19:40:54.587768  488958 cri.go:89] found id: "a738125ff91fa9557f957b47e040af0afc4e0c20eba8d133f0a7232ec66b0d66"
	I1003 19:40:54.587771  488958 cri.go:89] found id: "a789d122b33c055f37ef455982128473a2a103a67ed53fffdb7d04275c3e1c56"
	I1003 19:40:54.587775  488958 cri.go:89] found id: ""
	I1003 19:40:54.587826  488958 ssh_runner.go:195] Run: sudo runc list -f json
	I1003 19:40:54.609377  488958 out.go:203] 
	W1003 19:40:54.612621  488958 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-03T19:40:54Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-03T19:40:54Z" level=error msg="open /run/runc: no such file or directory"
	
	W1003 19:40:54.612640  488958 out.go:285] * 
	* 
	W1003 19:40:54.622535  488958 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 19:40:54.627741  488958 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p embed-certs-327416 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-327416
helpers_test.go:243: (dbg) docker inspect embed-certs-327416:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7044b9fbdfefb3fd8bce7381adae2abdcd93d79fb8452cc72e2f26e58ccd8222",
	        "Created": "2025-10-03T19:37:58.41651583Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 486220,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-03T19:39:48.806953605Z",
	            "FinishedAt": "2025-10-03T19:39:47.763628031Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/7044b9fbdfefb3fd8bce7381adae2abdcd93d79fb8452cc72e2f26e58ccd8222/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7044b9fbdfefb3fd8bce7381adae2abdcd93d79fb8452cc72e2f26e58ccd8222/hostname",
	        "HostsPath": "/var/lib/docker/containers/7044b9fbdfefb3fd8bce7381adae2abdcd93d79fb8452cc72e2f26e58ccd8222/hosts",
	        "LogPath": "/var/lib/docker/containers/7044b9fbdfefb3fd8bce7381adae2abdcd93d79fb8452cc72e2f26e58ccd8222/7044b9fbdfefb3fd8bce7381adae2abdcd93d79fb8452cc72e2f26e58ccd8222-json.log",
	        "Name": "/embed-certs-327416",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-327416:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-327416",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7044b9fbdfefb3fd8bce7381adae2abdcd93d79fb8452cc72e2f26e58ccd8222",
	                "LowerDir": "/var/lib/docker/overlay2/6d78601b2f0a3bddd2f05c4f4ab25e1cdd9b0b6f0850c52b546e1909596049d0-init/diff:/var/lib/docker/overlay2/87b205803817b0b71a214d995ab7e10a92033bbf72d76d6e052f1d21ccecb313/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6d78601b2f0a3bddd2f05c4f4ab25e1cdd9b0b6f0850c52b546e1909596049d0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6d78601b2f0a3bddd2f05c4f4ab25e1cdd9b0b6f0850c52b546e1909596049d0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6d78601b2f0a3bddd2f05c4f4ab25e1cdd9b0b6f0850c52b546e1909596049d0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-327416",
	                "Source": "/var/lib/docker/volumes/embed-certs-327416/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-327416",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-327416",
	                "name.minikube.sigs.k8s.io": "embed-certs-327416",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5165a1bbcf39ee90384661feb28d1fdb04ed8d0177377d647e91922cec0c0d98",
	            "SandboxKey": "/var/run/docker/netns/5165a1bbcf39",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33448"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33449"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33452"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33450"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33451"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-327416": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "96:a3:14:c5:c5:dc",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "438dfcb24f609c25637ab5cf83b5d0d8692bb34419c32369c46f82797d6523d1",
	                    "EndpointID": "cd81684c49c278d681da1feb21c437776a458070e42b2ac96a692e3d08c6914c",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-327416",
	                        "7044b9fbdfef"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-327416 -n embed-certs-327416
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-327416 -n embed-certs-327416: exit status 2 (402.317376ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-327416 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-327416 logs -n 25: (1.30255358s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ delete  │ -p cert-expiration-324520                                                                                                                                                                                                                     │ cert-expiration-324520       │ jenkins │ v1.37.0 │ 03 Oct 25 19:36 UTC │ 03 Oct 25 19:36 UTC │
	│ start   │ -p no-preload-643397 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-643397            │ jenkins │ v1.37.0 │ 03 Oct 25 19:36 UTC │ 03 Oct 25 19:37 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-174543 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-174543       │ jenkins │ v1.37.0 │ 03 Oct 25 19:36 UTC │ 03 Oct 25 19:36 UTC │
	│ start   │ -p old-k8s-version-174543 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-174543       │ jenkins │ v1.37.0 │ 03 Oct 25 19:36 UTC │ 03 Oct 25 19:37 UTC │
	│ image   │ old-k8s-version-174543 image list --format=json                                                                                                                                                                                               │ old-k8s-version-174543       │ jenkins │ v1.37.0 │ 03 Oct 25 19:37 UTC │ 03 Oct 25 19:37 UTC │
	│ pause   │ -p old-k8s-version-174543 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-174543       │ jenkins │ v1.37.0 │ 03 Oct 25 19:37 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-643397 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-643397            │ jenkins │ v1.37.0 │ 03 Oct 25 19:37 UTC │                     │
	│ stop    │ -p no-preload-643397 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-643397            │ jenkins │ v1.37.0 │ 03 Oct 25 19:37 UTC │ 03 Oct 25 19:38 UTC │
	│ delete  │ -p old-k8s-version-174543                                                                                                                                                                                                                     │ old-k8s-version-174543       │ jenkins │ v1.37.0 │ 03 Oct 25 19:37 UTC │ 03 Oct 25 19:37 UTC │
	│ delete  │ -p old-k8s-version-174543                                                                                                                                                                                                                     │ old-k8s-version-174543       │ jenkins │ v1.37.0 │ 03 Oct 25 19:37 UTC │ 03 Oct 25 19:37 UTC │
	│ start   │ -p embed-certs-327416 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-327416           │ jenkins │ v1.37.0 │ 03 Oct 25 19:37 UTC │ 03 Oct 25 19:39 UTC │
	│ addons  │ enable dashboard -p no-preload-643397 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-643397            │ jenkins │ v1.37.0 │ 03 Oct 25 19:38 UTC │ 03 Oct 25 19:38 UTC │
	│ start   │ -p no-preload-643397 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-643397            │ jenkins │ v1.37.0 │ 03 Oct 25 19:38 UTC │ 03 Oct 25 19:39 UTC │
	│ image   │ no-preload-643397 image list --format=json                                                                                                                                                                                                    │ no-preload-643397            │ jenkins │ v1.37.0 │ 03 Oct 25 19:39 UTC │ 03 Oct 25 19:39 UTC │
	│ pause   │ -p no-preload-643397 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-643397            │ jenkins │ v1.37.0 │ 03 Oct 25 19:39 UTC │                     │
	│ delete  │ -p no-preload-643397                                                                                                                                                                                                                          │ no-preload-643397            │ jenkins │ v1.37.0 │ 03 Oct 25 19:39 UTC │ 03 Oct 25 19:39 UTC │
	│ delete  │ -p no-preload-643397                                                                                                                                                                                                                          │ no-preload-643397            │ jenkins │ v1.37.0 │ 03 Oct 25 19:39 UTC │ 03 Oct 25 19:39 UTC │
	│ delete  │ -p disable-driver-mounts-839513                                                                                                                                                                                                               │ disable-driver-mounts-839513 │ jenkins │ v1.37.0 │ 03 Oct 25 19:39 UTC │ 03 Oct 25 19:39 UTC │
	│ start   │ -p default-k8s-diff-port-842797 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-842797 │ jenkins │ v1.37.0 │ 03 Oct 25 19:39 UTC │ 03 Oct 25 19:40 UTC │
	│ addons  │ enable metrics-server -p embed-certs-327416 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-327416           │ jenkins │ v1.37.0 │ 03 Oct 25 19:39 UTC │                     │
	│ stop    │ -p embed-certs-327416 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-327416           │ jenkins │ v1.37.0 │ 03 Oct 25 19:39 UTC │ 03 Oct 25 19:39 UTC │
	│ addons  │ enable dashboard -p embed-certs-327416 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-327416           │ jenkins │ v1.37.0 │ 03 Oct 25 19:39 UTC │ 03 Oct 25 19:39 UTC │
	│ start   │ -p embed-certs-327416 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-327416           │ jenkins │ v1.37.0 │ 03 Oct 25 19:39 UTC │ 03 Oct 25 19:40 UTC │
	│ image   │ embed-certs-327416 image list --format=json                                                                                                                                                                                                   │ embed-certs-327416           │ jenkins │ v1.37.0 │ 03 Oct 25 19:40 UTC │ 03 Oct 25 19:40 UTC │
	│ pause   │ -p embed-certs-327416 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-327416           │ jenkins │ v1.37.0 │ 03 Oct 25 19:40 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/03 19:39:48
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 19:39:48.406056  486090 out.go:360] Setting OutFile to fd 1 ...
	I1003 19:39:48.406272  486090 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 19:39:48.406299  486090 out.go:374] Setting ErrFile to fd 2...
	I1003 19:39:48.406317  486090 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 19:39:48.406630  486090 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-284583/.minikube/bin
	I1003 19:39:48.408003  486090 out.go:368] Setting JSON to false
	I1003 19:39:48.409020  486090 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8540,"bootTime":1759511849,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1003 19:39:48.409111  486090 start.go:140] virtualization:  
	I1003 19:39:48.414161  486090 out.go:179] * [embed-certs-327416] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1003 19:39:48.417360  486090 notify.go:220] Checking for updates...
	I1003 19:39:48.420766  486090 out.go:179]   - MINIKUBE_LOCATION=21625
	I1003 19:39:48.423562  486090 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 19:39:48.426471  486090 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21625-284583/kubeconfig
	I1003 19:39:48.429299  486090 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-284583/.minikube
	I1003 19:39:48.432144  486090 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1003 19:39:48.434908  486090 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 19:39:48.438199  486090 config.go:182] Loaded profile config "embed-certs-327416": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 19:39:48.438763  486090 driver.go:421] Setting default libvirt URI to qemu:///system
	I1003 19:39:48.468865  486090 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1003 19:39:48.468982  486090 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 19:39:48.574762  486090 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:43 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-03 19:39:48.564624002 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1003 19:39:48.574873  486090 docker.go:318] overlay module found
	I1003 19:39:48.577891  486090 out.go:179] * Using the docker driver based on existing profile
	I1003 19:39:48.580702  486090 start.go:304] selected driver: docker
	I1003 19:39:48.580794  486090 start.go:924] validating driver "docker" against &{Name:embed-certs-327416 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-327416 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 19:39:48.580912  486090 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 19:39:48.581602  486090 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 19:39:48.690084  486090 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:43 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-03 19:39:48.680014635 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1003 19:39:48.690427  486090 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 19:39:48.690462  486090 cni.go:84] Creating CNI manager for ""
	I1003 19:39:48.690528  486090 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 19:39:48.690573  486090 start.go:348] cluster config:
	{Name:embed-certs-327416 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-327416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 19:39:48.695599  486090 out.go:179] * Starting "embed-certs-327416" primary control-plane node in "embed-certs-327416" cluster
	I1003 19:39:48.698376  486090 cache.go:123] Beginning downloading kic base image for docker with crio
	I1003 19:39:48.701351  486090 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1003 19:39:48.704158  486090 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 19:39:48.704218  486090 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21625-284583/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1003 19:39:48.704233  486090 cache.go:58] Caching tarball of preloaded images
	I1003 19:39:48.704331  486090 preload.go:233] Found /home/jenkins/minikube-integration/21625-284583/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1003 19:39:48.704347  486090 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1003 19:39:48.704461  486090 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/embed-certs-327416/config.json ...
	I1003 19:39:48.704686  486090 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1003 19:39:48.732663  486090 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1003 19:39:48.732691  486090 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1003 19:39:48.732705  486090 cache.go:232] Successfully downloaded all kic artifacts
	I1003 19:39:48.732781  486090 start.go:360] acquireMachinesLock for embed-certs-327416: {Name:mk5dc758d01b8c5f84eccb23a8f0d09c618d844f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 19:39:48.732861  486090 start.go:364] duration metric: took 46.983µs to acquireMachinesLock for "embed-certs-327416"
	I1003 19:39:48.732887  486090 start.go:96] Skipping create...Using existing machine configuration
	I1003 19:39:48.732898  486090 fix.go:54] fixHost starting: 
	I1003 19:39:48.733155  486090 cli_runner.go:164] Run: docker container inspect embed-certs-327416 --format={{.State.Status}}
	I1003 19:39:48.759430  486090 fix.go:112] recreateIfNeeded on embed-certs-327416: state=Stopped err=<nil>
	W1003 19:39:48.759482  486090 fix.go:138] unexpected machine state, will restart: <nil>
	I1003 19:39:46.683083  483467 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1003 19:39:47.215411  483467 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1003 19:39:47.871927  483467 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1003 19:39:47.872204  483467 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1003 19:39:48.377112  483467 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1003 19:39:48.713076  483467 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1003 19:39:49.977727  483467 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1003 19:39:51.424340  483467 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1003 19:39:51.621915  483467 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1003 19:39:51.622717  483467 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1003 19:39:51.625492  483467 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1003 19:39:48.762885  486090 out.go:252] * Restarting existing docker container for "embed-certs-327416" ...
	I1003 19:39:48.762994  486090 cli_runner.go:164] Run: docker start embed-certs-327416
	I1003 19:39:49.052325  486090 cli_runner.go:164] Run: docker container inspect embed-certs-327416 --format={{.State.Status}}
	I1003 19:39:49.084435  486090 kic.go:430] container "embed-certs-327416" state is running.
	I1003 19:39:49.084868  486090 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-327416
	I1003 19:39:49.114499  486090 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/embed-certs-327416/config.json ...
	I1003 19:39:49.114727  486090 machine.go:93] provisionDockerMachine start ...
	I1003 19:39:49.114787  486090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-327416
	I1003 19:39:49.145479  486090 main.go:141] libmachine: Using SSH client type: native
	I1003 19:39:49.145801  486090 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1003 19:39:49.145817  486090 main.go:141] libmachine: About to run SSH command:
	hostname
	I1003 19:39:49.148254  486090 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1003 19:39:52.301067  486090 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-327416
	
	I1003 19:39:52.301112  486090 ubuntu.go:182] provisioning hostname "embed-certs-327416"
	I1003 19:39:52.301199  486090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-327416
	I1003 19:39:52.324448  486090 main.go:141] libmachine: Using SSH client type: native
	I1003 19:39:52.324837  486090 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1003 19:39:52.324852  486090 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-327416 && echo "embed-certs-327416" | sudo tee /etc/hostname
	I1003 19:39:52.487798  486090 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-327416
	
	I1003 19:39:52.487940  486090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-327416
	I1003 19:39:52.514479  486090 main.go:141] libmachine: Using SSH client type: native
	I1003 19:39:52.514791  486090 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1003 19:39:52.514808  486090 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-327416' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-327416/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-327416' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 19:39:52.653185  486090 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 19:39:52.653276  486090 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21625-284583/.minikube CaCertPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21625-284583/.minikube}
	I1003 19:39:52.653328  486090 ubuntu.go:190] setting up certificates
	I1003 19:39:52.653364  486090 provision.go:84] configureAuth start
	I1003 19:39:52.653456  486090 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-327416
	I1003 19:39:52.678375  486090 provision.go:143] copyHostCerts
	I1003 19:39:52.678440  486090 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-284583/.minikube/key.pem, removing ...
	I1003 19:39:52.678458  486090 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-284583/.minikube/key.pem
	I1003 19:39:52.678535  486090 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21625-284583/.minikube/key.pem (1675 bytes)
	I1003 19:39:52.678640  486090 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-284583/.minikube/ca.pem, removing ...
	I1003 19:39:52.678645  486090 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-284583/.minikube/ca.pem
	I1003 19:39:52.678677  486090 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21625-284583/.minikube/ca.pem (1082 bytes)
	I1003 19:39:52.678741  486090 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-284583/.minikube/cert.pem, removing ...
	I1003 19:39:52.678746  486090 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-284583/.minikube/cert.pem
	I1003 19:39:52.678772  486090 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21625-284583/.minikube/cert.pem (1123 bytes)
	I1003 19:39:52.678828  486090 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca-key.pem org=jenkins.embed-certs-327416 san=[127.0.0.1 192.168.85.2 embed-certs-327416 localhost minikube]
	I1003 19:39:51.629007  483467 out.go:252]   - Booting up control plane ...
	I1003 19:39:51.629116  483467 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1003 19:39:51.629198  483467 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1003 19:39:51.629269  483467 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1003 19:39:51.644565  483467 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1003 19:39:51.644972  483467 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1003 19:39:51.653263  483467 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1003 19:39:51.653582  483467 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1003 19:39:51.653644  483467 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1003 19:39:51.783065  483467 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1003 19:39:51.783191  483467 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1003 19:39:53.787581  483467 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 2.001752936s
	I1003 19:39:53.787705  483467 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1003 19:39:53.787791  483467 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8444/livez
	I1003 19:39:53.787885  483467 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1003 19:39:53.787968  483467 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1003 19:39:53.585518  486090 provision.go:177] copyRemoteCerts
	I1003 19:39:53.585643  486090 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 19:39:53.585729  486090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-327416
	I1003 19:39:53.604907  486090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/embed-certs-327416/id_rsa Username:docker}
	I1003 19:39:53.704464  486090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 19:39:53.726033  486090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1003 19:39:53.744396  486090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1003 19:39:53.762518  486090 provision.go:87] duration metric: took 1.109120098s to configureAuth
	I1003 19:39:53.762544  486090 ubuntu.go:206] setting minikube options for container-runtime
	I1003 19:39:53.762724  486090 config.go:182] Loaded profile config "embed-certs-327416": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 19:39:53.762831  486090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-327416
	I1003 19:39:53.780425  486090 main.go:141] libmachine: Using SSH client type: native
	I1003 19:39:53.780782  486090 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1003 19:39:53.780806  486090 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1003 19:39:54.178466  486090 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1003 19:39:54.178541  486090 machine.go:96] duration metric: took 5.063804675s to provisionDockerMachine
	I1003 19:39:54.178585  486090 start.go:293] postStartSetup for "embed-certs-327416" (driver="docker")
	I1003 19:39:54.178623  486090 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 19:39:54.178728  486090 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 19:39:54.178799  486090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-327416
	I1003 19:39:54.202940  486090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/embed-certs-327416/id_rsa Username:docker}
	I1003 19:39:54.322166  486090 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 19:39:54.325643  486090 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1003 19:39:54.325718  486090 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1003 19:39:54.325743  486090 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-284583/.minikube/addons for local assets ...
	I1003 19:39:54.325828  486090 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-284583/.minikube/files for local assets ...
	I1003 19:39:54.325965  486090 filesync.go:149] local asset: /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem -> 2864342.pem in /etc/ssl/certs
	I1003 19:39:54.326115  486090 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 19:39:54.338999  486090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem --> /etc/ssl/certs/2864342.pem (1708 bytes)
	I1003 19:39:54.366124  486090 start.go:296] duration metric: took 187.497578ms for postStartSetup
	I1003 19:39:54.366211  486090 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 19:39:54.366257  486090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-327416
	I1003 19:39:54.403022  486090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/embed-certs-327416/id_rsa Username:docker}
	I1003 19:39:54.506155  486090 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1003 19:39:54.514166  486090 fix.go:56] duration metric: took 5.781259919s for fixHost
	I1003 19:39:54.514192  486090 start.go:83] releasing machines lock for "embed-certs-327416", held for 5.781315928s
	I1003 19:39:54.514274  486090 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-327416
	I1003 19:39:54.546329  486090 ssh_runner.go:195] Run: cat /version.json
	I1003 19:39:54.546384  486090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-327416
	I1003 19:39:54.546646  486090 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 19:39:54.546702  486090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-327416
	I1003 19:39:54.575481  486090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/embed-certs-327416/id_rsa Username:docker}
	I1003 19:39:54.596971  486090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/embed-certs-327416/id_rsa Username:docker}
	I1003 19:39:54.688394  486090 ssh_runner.go:195] Run: systemctl --version
	I1003 19:39:54.806335  486090 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1003 19:39:54.885166  486090 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1003 19:39:54.889961  486090 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 19:39:54.890031  486090 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 19:39:54.900767  486090 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1003 19:39:54.900792  486090 start.go:495] detecting cgroup driver to use...
	I1003 19:39:54.900823  486090 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1003 19:39:54.900878  486090 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 19:39:54.922992  486090 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 19:39:54.945562  486090 docker.go:218] disabling cri-docker service (if available) ...
	I1003 19:39:54.945624  486090 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1003 19:39:54.965809  486090 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1003 19:39:54.984754  486090 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1003 19:39:55.213902  486090 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1003 19:39:55.431424  486090 docker.go:234] disabling docker service ...
	I1003 19:39:55.431504  486090 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1003 19:39:55.453523  486090 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1003 19:39:55.466989  486090 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1003 19:39:55.658467  486090 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1003 19:39:55.840577  486090 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1003 19:39:55.866618  486090 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 19:39:55.890570  486090 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1003 19:39:55.890719  486090 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:39:55.904611  486090 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1003 19:39:55.904773  486090 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:39:55.921983  486090 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:39:55.937691  486090 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:39:55.950924  486090 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 19:39:55.965449  486090 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:39:55.978736  486090 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:39:55.994168  486090 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:39:56.010522  486090 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 19:39:56.026080  486090 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 19:39:56.046911  486090 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 19:39:56.268510  486090 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1003 19:39:56.482429  486090 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1003 19:39:56.482495  486090 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1003 19:39:56.487915  486090 start.go:563] Will wait 60s for crictl version
	I1003 19:39:56.488031  486090 ssh_runner.go:195] Run: which crictl
	I1003 19:39:56.496915  486090 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1003 19:39:56.545778  486090 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1003 19:39:56.545860  486090 ssh_runner.go:195] Run: crio --version
	I1003 19:39:56.590615  486090 ssh_runner.go:195] Run: crio --version
	I1003 19:39:56.645731  486090 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1003 19:39:56.648492  486090 cli_runner.go:164] Run: docker network inspect embed-certs-327416 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 19:39:56.668953  486090 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1003 19:39:56.672973  486090 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 19:39:56.686588  486090 kubeadm.go:883] updating cluster {Name:embed-certs-327416 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-327416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1003 19:39:56.686711  486090 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 19:39:56.686761  486090 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 19:39:56.744749  486090 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 19:39:56.744773  486090 crio.go:433] Images already preloaded, skipping extraction
	I1003 19:39:56.744827  486090 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 19:39:56.793917  486090 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 19:39:56.793941  486090 cache_images.go:85] Images are preloaded, skipping loading
	I1003 19:39:56.793949  486090 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1003 19:39:56.794052  486090 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-327416 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-327416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1003 19:39:56.794141  486090 ssh_runner.go:195] Run: crio config
	I1003 19:39:56.902732  486090 cni.go:84] Creating CNI manager for ""
	I1003 19:39:56.902755  486090 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 19:39:56.902775  486090 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1003 19:39:56.902797  486090 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-327416 NodeName:embed-certs-327416 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 19:39:56.902923  486090 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-327416"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1003 19:39:56.903001  486090 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1003 19:39:56.923431  486090 binaries.go:44] Found k8s binaries, skipping transfer
	I1003 19:39:56.923502  486090 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1003 19:39:56.933656  486090 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1003 19:39:56.954918  486090 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 19:39:56.990889  486090 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1003 19:39:57.007988  486090 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1003 19:39:57.012607  486090 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 19:39:57.031100  486090 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 19:39:57.219401  486090 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 19:39:57.236340  486090 certs.go:69] Setting up /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/embed-certs-327416 for IP: 192.168.85.2
	I1003 19:39:57.236370  486090 certs.go:195] generating shared ca certs ...
	I1003 19:39:57.236386  486090 certs.go:227] acquiring lock for ca certs: {Name:mk5a10e6c921326e9c211447576eaeb893259ba7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:39:57.236537  486090 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21625-284583/.minikube/ca.key
	I1003 19:39:57.236585  486090 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.key
	I1003 19:39:57.236605  486090 certs.go:257] generating profile certs ...
	I1003 19:39:57.236708  486090 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/embed-certs-327416/client.key
	I1003 19:39:57.236794  486090 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/embed-certs-327416/apiserver.key.00090923
	I1003 19:39:57.236851  486090 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/embed-certs-327416/proxy-client.key
	I1003 19:39:57.236993  486090 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/286434.pem (1338 bytes)
	W1003 19:39:57.237029  486090 certs.go:480] ignoring /home/jenkins/minikube-integration/21625-284583/.minikube/certs/286434_empty.pem, impossibly tiny 0 bytes
	I1003 19:39:57.237049  486090 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 19:39:57.237080  486090 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem (1082 bytes)
	I1003 19:39:57.237128  486090 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem (1123 bytes)
	I1003 19:39:57.237159  486090 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/key.pem (1675 bytes)
	I1003 19:39:57.237214  486090 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem (1708 bytes)
	I1003 19:39:57.237861  486090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 19:39:57.277658  486090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1003 19:39:57.334048  486090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 19:39:57.391162  486090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1003 19:39:57.431674  486090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/embed-certs-327416/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1003 19:39:57.465806  486090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/embed-certs-327416/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1003 19:39:57.485328  486090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/embed-certs-327416/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 19:39:57.527173  486090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/embed-certs-327416/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1003 19:39:57.586733  486090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 19:39:57.653883  486090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/certs/286434.pem --> /usr/share/ca-certificates/286434.pem (1338 bytes)
	I1003 19:39:57.684024  486090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem --> /usr/share/ca-certificates/2864342.pem (1708 bytes)
	I1003 19:39:57.713347  486090 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1003 19:39:57.738914  486090 ssh_runner.go:195] Run: openssl version
	I1003 19:39:57.749213  486090 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 19:39:57.761879  486090 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 19:39:57.765847  486090 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  3 18:27 /usr/share/ca-certificates/minikubeCA.pem
	I1003 19:39:57.765929  486090 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 19:39:57.808221  486090 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1003 19:39:57.818722  486090 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/286434.pem && ln -fs /usr/share/ca-certificates/286434.pem /etc/ssl/certs/286434.pem"
	I1003 19:39:57.829386  486090 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/286434.pem
	I1003 19:39:57.833463  486090 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  3 18:34 /usr/share/ca-certificates/286434.pem
	I1003 19:39:57.833537  486090 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/286434.pem
	I1003 19:39:57.879415  486090 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/286434.pem /etc/ssl/certs/51391683.0"
	I1003 19:39:57.893919  486090 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2864342.pem && ln -fs /usr/share/ca-certificates/2864342.pem /etc/ssl/certs/2864342.pem"
	I1003 19:39:57.902841  486090 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2864342.pem
	I1003 19:39:57.911381  486090 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  3 18:34 /usr/share/ca-certificates/2864342.pem
	I1003 19:39:57.911479  486090 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2864342.pem
	I1003 19:39:57.970764  486090 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2864342.pem /etc/ssl/certs/3ec20f2e.0"
	I1003 19:39:57.983014  486090 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 19:39:57.987442  486090 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1003 19:39:58.074463  486090 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1003 19:39:58.165704  486090 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1003 19:39:58.226081  486090 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1003 19:39:58.401526  486090 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1003 19:39:58.581476  486090 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1003 19:39:58.733586  486090 kubeadm.go:400] StartCluster: {Name:embed-certs-327416 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-327416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 19:39:58.733686  486090 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1003 19:39:58.733771  486090 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1003 19:39:58.833897  486090 cri.go:89] found id: "7251d8be4bbe1feadb8d7586aad5c359dbd66fd31d01b439cbe4b247e9edacb9"
	I1003 19:39:58.833932  486090 cri.go:89] found id: "d175d98dcd2f4aad68e57c312506a537fcec4add7ab32b2ffa4c3126efd41601"
	I1003 19:39:58.833941  486090 cri.go:89] found id: "58e88d8c2849a5437eb7767eb255d61ad53372f61e98f7b15fba814d13e38b12"
	I1003 19:39:58.833945  486090 cri.go:89] found id: "0c6c5a56f754c48cee635b6a3f179cd14335b49d4105c542ea8de2a52f7a1289"
	I1003 19:39:58.833948  486090 cri.go:89] found id: ""
	I1003 19:39:58.834021  486090 ssh_runner.go:195] Run: sudo runc list -f json
	W1003 19:39:58.862452  486090 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-03T19:39:58Z" level=error msg="open /run/runc: no such file or directory"
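The failed "sudo runc list -f json" above is only used to detect paused containers; runc keeps per-container state under its default root, /run/runc, and when that directory was never created the listing fails exactly as logged. A quick check to distinguish "nothing tracked by runc" from a broken runtime, assuming the same node:

	sudo ls -ld /run/runc || echo "no runc state directory yet"
	sudo runc list -f json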
	I1003 19:39:58.862563  486090 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 19:39:58.881833  486090 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1003 19:39:58.881877  486090 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1003 19:39:58.881934  486090 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1003 19:39:58.906826  486090 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1003 19:39:58.907284  486090 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-327416" does not appear in /home/jenkins/minikube-integration/21625-284583/kubeconfig
	I1003 19:39:58.907438  486090 kubeconfig.go:62] /home/jenkins/minikube-integration/21625-284583/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-327416" cluster setting kubeconfig missing "embed-certs-327416" context setting]
	I1003 19:39:58.907756  486090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/kubeconfig: {Name:mkc1323fd87f4a78231a26d2dab0dff7feecf1e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:39:58.909428  486090 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1003 19:39:58.935594  486090 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1003 19:39:58.935643  486090 kubeadm.go:601] duration metric: took 53.7523ms to restartPrimaryControlPlane
	I1003 19:39:58.935653  486090 kubeadm.go:402] duration metric: took 202.077841ms to StartCluster
	I1003 19:39:58.935668  486090 settings.go:142] acquiring lock: {Name:mkc95577dbc448e3409dfa2b5e53a3a1327cb451 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:39:58.935742  486090 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21625-284583/kubeconfig
	I1003 19:39:58.936816  486090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/kubeconfig: {Name:mkc1323fd87f4a78231a26d2dab0dff7feecf1e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:39:58.937055  486090 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1003 19:39:58.937428  486090 config.go:182] Loaded profile config "embed-certs-327416": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 19:39:58.937414  486090 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1003 19:39:58.937540  486090 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-327416"
	I1003 19:39:58.937555  486090 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-327416"
	W1003 19:39:58.937563  486090 addons.go:247] addon storage-provisioner should already be in state true
	I1003 19:39:58.937587  486090 host.go:66] Checking if "embed-certs-327416" exists ...
	I1003 19:39:58.938039  486090 cli_runner.go:164] Run: docker container inspect embed-certs-327416 --format={{.State.Status}}
	I1003 19:39:58.938228  486090 addons.go:69] Setting dashboard=true in profile "embed-certs-327416"
	I1003 19:39:58.938271  486090 addons.go:238] Setting addon dashboard=true in "embed-certs-327416"
	W1003 19:39:58.938295  486090 addons.go:247] addon dashboard should already be in state true
	I1003 19:39:58.938333  486090 host.go:66] Checking if "embed-certs-327416" exists ...
	I1003 19:39:58.938551  486090 addons.go:69] Setting default-storageclass=true in profile "embed-certs-327416"
	I1003 19:39:58.938575  486090 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-327416"
	I1003 19:39:58.938822  486090 cli_runner.go:164] Run: docker container inspect embed-certs-327416 --format={{.State.Status}}
	I1003 19:39:58.938874  486090 cli_runner.go:164] Run: docker container inspect embed-certs-327416 --format={{.State.Status}}
	I1003 19:39:58.941853  486090 out.go:179] * Verifying Kubernetes components...
	I1003 19:39:58.950884  486090 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 19:39:58.986927  486090 addons.go:238] Setting addon default-storageclass=true in "embed-certs-327416"
	W1003 19:39:58.986955  486090 addons.go:247] addon default-storageclass should already be in state true
	I1003 19:39:58.986979  486090 host.go:66] Checking if "embed-certs-327416" exists ...
	I1003 19:39:58.987392  486090 cli_runner.go:164] Run: docker container inspect embed-certs-327416 --format={{.State.Status}}
	I1003 19:39:59.004130  486090 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 19:39:59.004245  486090 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1003 19:39:59.008098  486090 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1003 19:39:59.008230  486090 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 19:39:59.008244  486090 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1003 19:39:59.008323  486090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-327416
	I1003 19:39:59.012800  486090 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1003 19:39:59.012836  486090 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1003 19:39:59.012908  486090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-327416
	I1003 19:39:59.033523  486090 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1003 19:39:59.033550  486090 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1003 19:39:59.033617  486090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-327416
	I1003 19:39:59.064924  486090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/embed-certs-327416/id_rsa Username:docker}
	I1003 19:39:59.072820  486090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/embed-certs-327416/id_rsa Username:docker}
	I1003 19:39:59.084949  486090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/embed-certs-327416/id_rsa Username:docker}
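The repeated "docker container inspect -f ..." calls above resolve the host port Docker mapped to the container's 22/tcp, which is the 127.0.0.1:33448 the ssh clients then connect to. The same lookup, stated on its own for clarity (container name as above):

	# Go template: index the port map by "22/tcp", take the first binding, read HostPort.
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' embed-certs-327416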
	I1003 19:39:59.470877  486090 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 19:39:59.525123  486090 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1003 19:39:59.525145  486090 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1003 19:39:59.553366  486090 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1003 19:39:59.584068  486090 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 19:39:59.605388  486090 node_ready.go:35] waiting up to 6m0s for node "embed-certs-327416" to be "Ready" ...
	I1003 19:39:59.636677  486090 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1003 19:39:59.636707  486090 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1003 19:39:59.773937  486090 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1003 19:39:59.773962  486090 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1003 19:39:59.875189  486090 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1003 19:39:59.875215  486090 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1003 19:40:00.018543  486090 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1003 19:40:00.018572  486090 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1003 19:40:00.149227  486090 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1003 19:40:00.149256  486090 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1003 19:40:00.224486  486090 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1003 19:40:00.224517  486090 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1003 19:40:00.294180  486090 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1003 19:40:00.294208  486090 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1003 19:40:00.352071  486090 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1003 19:40:00.352100  486090 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1003 19:40:00.386881  486090 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1003 19:40:01.159781  483467 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 7.371754693s
	I1003 19:40:02.716838  483467 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 8.929305815s
	I1003 19:40:03.790589  483467 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 10.002833133s
	I1003 19:40:03.813779  483467 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1003 19:40:03.837694  483467 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1003 19:40:03.851798  483467 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1003 19:40:03.852007  483467 kubeadm.go:318] [mark-control-plane] Marking the node default-k8s-diff-port-842797 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1003 19:40:03.869722  483467 kubeadm.go:318] [bootstrap-token] Using token: t3ldah.09tb2yxkfmma6h8c
	I1003 19:40:03.872779  483467 out.go:252]   - Configuring RBAC rules ...
	I1003 19:40:03.872900  483467 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1003 19:40:03.884948  483467 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1003 19:40:03.901207  483467 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1003 19:40:03.908547  483467 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1003 19:40:03.913073  483467 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1003 19:40:03.917154  483467 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1003 19:40:04.197193  483467 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1003 19:40:04.757783  483467 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1003 19:40:05.209674  483467 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1003 19:40:05.211324  483467 kubeadm.go:318] 
	I1003 19:40:05.211413  483467 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1003 19:40:05.211419  483467 kubeadm.go:318] 
	I1003 19:40:05.211500  483467 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1003 19:40:05.211505  483467 kubeadm.go:318] 
	I1003 19:40:05.211531  483467 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1003 19:40:05.212032  483467 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1003 19:40:05.212101  483467 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1003 19:40:05.212107  483467 kubeadm.go:318] 
	I1003 19:40:05.212168  483467 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1003 19:40:05.212173  483467 kubeadm.go:318] 
	I1003 19:40:05.212222  483467 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1003 19:40:05.212226  483467 kubeadm.go:318] 
	I1003 19:40:05.212281  483467 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1003 19:40:05.212359  483467 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1003 19:40:05.212430  483467 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1003 19:40:05.212434  483467 kubeadm.go:318] 
	I1003 19:40:05.212810  483467 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1003 19:40:05.212971  483467 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1003 19:40:05.213004  483467 kubeadm.go:318] 
	I1003 19:40:05.213316  483467 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8444 --token t3ldah.09tb2yxkfmma6h8c \
	I1003 19:40:05.213430  483467 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:f66ff31263aa4cda6b17caa2076838d6a1918275f1c2773b90b119c0d4a4d71a \
	I1003 19:40:05.213658  483467 kubeadm.go:318] 	--control-plane 
	I1003 19:40:05.213669  483467 kubeadm.go:318] 
	I1003 19:40:05.213984  483467 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1003 19:40:05.213994  483467 kubeadm.go:318] 
	I1003 19:40:05.214295  483467 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8444 --token t3ldah.09tb2yxkfmma6h8c \
	I1003 19:40:05.214607  483467 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:f66ff31263aa4cda6b17caa2076838d6a1918275f1c2773b90b119c0d4a4d71a 
	I1003 19:40:05.224654  483467 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1003 19:40:05.224895  483467 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1003 19:40:05.224999  483467 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1003 19:40:05.225016  483467 cni.go:84] Creating CNI manager for ""
	I1003 19:40:05.225027  483467 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 19:40:05.228397  483467 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1003 19:40:05.231268  483467 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1003 19:40:05.241725  483467 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1003 19:40:05.241744  483467 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1003 19:40:05.285091  483467 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1003 19:40:05.899280  483467 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1003 19:40:05.899430  483467 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 19:40:05.899506  483467 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-842797 minikube.k8s.io/updated_at=2025_10_03T19_40_05_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=a43873c79fc22f8b1ccd29d3dfa635d392b09335 minikube.k8s.io/name=default-k8s-diff-port-842797 minikube.k8s.io/primary=true
	I1003 19:40:05.886238  486090 node_ready.go:49] node "embed-certs-327416" is "Ready"
	I1003 19:40:05.886265  486090 node_ready.go:38] duration metric: took 6.280845633s for node "embed-certs-327416" to be "Ready" ...
	I1003 19:40:05.886279  486090 api_server.go:52] waiting for apiserver process to appear ...
	I1003 19:40:05.886356  486090 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 19:40:06.569129  486090 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.015719388s)
	I1003 19:40:08.376120  486090 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.791956347s)
	I1003 19:40:08.376388  486090 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.490019166s)
	I1003 19:40:08.376405  486090 api_server.go:72] duration metric: took 9.439311864s to wait for apiserver process to appear ...
	I1003 19:40:08.376412  486090 api_server.go:88] waiting for apiserver healthz status ...
	I1003 19:40:08.376429  486090 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1003 19:40:08.376351  486090 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.98943414s)
	I1003 19:40:08.380053  486090 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-327416 addons enable metrics-server
	
	I1003 19:40:08.383081  486090 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1003 19:40:08.386051  486090 addons.go:514] duration metric: took 9.448611445s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1003 19:40:08.388956  486090 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
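The healthz probe logged above can be reproduced from the host; a minimal sketch against the same endpoint, ignoring the self-signed certificate:

	curl -k https://192.168.85.2:8443/healthz
	# a healthy apiserver answers with the body: ok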
	I1003 19:40:08.391422  486090 api_server.go:141] control plane version: v1.34.1
	I1003 19:40:08.391445  486090 api_server.go:131] duration metric: took 15.027324ms to wait for apiserver health ...
	I1003 19:40:08.391454  486090 system_pods.go:43] waiting for kube-system pods to appear ...
	I1003 19:40:08.395808  486090 system_pods.go:59] 8 kube-system pods found
	I1003 19:40:08.395893  486090 system_pods.go:61] "coredns-66bc5c9577-bjdpd" [17c509e4-9d58-4e2e-9a05-3e6eb361dc8a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1003 19:40:08.395917  486090 system_pods.go:61] "etcd-embed-certs-327416" [292d87c6-b170-473c-94eb-33bf1ec95a97] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1003 19:40:08.395955  486090 system_pods.go:61] "kindnet-2jswv" [b05191d5-b4b3-42d6-8488-25e3b30ad1a1] Running
	I1003 19:40:08.395983  486090 system_pods.go:61] "kube-apiserver-embed-certs-327416" [da030608-0739-46db-a5c1-bd540ab4a19a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1003 19:40:08.396007  486090 system_pods.go:61] "kube-controller-manager-embed-certs-327416" [5b0e00b7-6093-4c79-a1a2-2b21160b65dd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1003 19:40:08.396040  486090 system_pods.go:61] "kube-proxy-ncw55" [54ac7a9a-424b-4c7e-94a8-5a15bc1d91c2] Running
	I1003 19:40:08.396065  486090 system_pods.go:61] "kube-scheduler-embed-certs-327416" [86958be9-5e24-4927-80fd-8e2101189244] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1003 19:40:08.396084  486090 system_pods.go:61] "storage-provisioner" [b02f2aae-4045-452f-aaac-e4bf1daea610] Running
	I1003 19:40:08.396121  486090 system_pods.go:74] duration metric: took 4.659911ms to wait for pod list to return data ...
	I1003 19:40:08.396146  486090 default_sa.go:34] waiting for default service account to be created ...
	I1003 19:40:08.400509  486090 default_sa.go:45] found service account: "default"
	I1003 19:40:08.400584  486090 default_sa.go:55] duration metric: took 4.415666ms for default service account to be created ...
	I1003 19:40:08.400607  486090 system_pods.go:116] waiting for k8s-apps to be running ...
	I1003 19:40:08.404384  486090 system_pods.go:86] 8 kube-system pods found
	I1003 19:40:08.404464  486090 system_pods.go:89] "coredns-66bc5c9577-bjdpd" [17c509e4-9d58-4e2e-9a05-3e6eb361dc8a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1003 19:40:08.404488  486090 system_pods.go:89] "etcd-embed-certs-327416" [292d87c6-b170-473c-94eb-33bf1ec95a97] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1003 19:40:08.404506  486090 system_pods.go:89] "kindnet-2jswv" [b05191d5-b4b3-42d6-8488-25e3b30ad1a1] Running
	I1003 19:40:08.404543  486090 system_pods.go:89] "kube-apiserver-embed-certs-327416" [da030608-0739-46db-a5c1-bd540ab4a19a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1003 19:40:08.404569  486090 system_pods.go:89] "kube-controller-manager-embed-certs-327416" [5b0e00b7-6093-4c79-a1a2-2b21160b65dd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1003 19:40:08.404588  486090 system_pods.go:89] "kube-proxy-ncw55" [54ac7a9a-424b-4c7e-94a8-5a15bc1d91c2] Running
	I1003 19:40:08.404626  486090 system_pods.go:89] "kube-scheduler-embed-certs-327416" [86958be9-5e24-4927-80fd-8e2101189244] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1003 19:40:08.404680  486090 system_pods.go:89] "storage-provisioner" [b02f2aae-4045-452f-aaac-e4bf1daea610] Running
	I1003 19:40:08.404716  486090 system_pods.go:126] duration metric: took 4.089794ms to wait for k8s-apps to be running ...
	I1003 19:40:08.404755  486090 system_svc.go:44] waiting for kubelet service to be running ....
	I1003 19:40:08.404845  486090 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 19:40:06.365683  483467 ops.go:34] apiserver oom_adj: -16
	I1003 19:40:06.365786  483467 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 19:40:06.865955  483467 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 19:40:07.365876  483467 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 19:40:07.866780  483467 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 19:40:08.366254  483467 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 19:40:08.866211  483467 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 19:40:09.366456  483467 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 19:40:09.866524  483467 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 19:40:10.365874  483467 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 19:40:10.865924  483467 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 19:40:11.070612  483467 kubeadm.go:1113] duration metric: took 5.171241197s to wait for elevateKubeSystemPrivileges
	I1003 19:40:11.070646  483467 kubeadm.go:402] duration metric: took 30.84337732s to StartCluster
	I1003 19:40:11.070665  483467 settings.go:142] acquiring lock: {Name:mkc95577dbc448e3409dfa2b5e53a3a1327cb451 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:40:11.070736  483467 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21625-284583/kubeconfig
	I1003 19:40:11.072308  483467 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/kubeconfig: {Name:mkc1323fd87f4a78231a26d2dab0dff7feecf1e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:40:11.072574  483467 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1003 19:40:11.072846  483467 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1003 19:40:11.073142  483467 config.go:182] Loaded profile config "default-k8s-diff-port-842797": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 19:40:11.073193  483467 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1003 19:40:11.073265  483467 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-842797"
	I1003 19:40:11.073285  483467 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-842797"
	I1003 19:40:11.073314  483467 host.go:66] Checking if "default-k8s-diff-port-842797" exists ...
	I1003 19:40:11.073780  483467 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-842797 --format={{.State.Status}}
	I1003 19:40:11.074196  483467 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-842797"
	I1003 19:40:11.074219  483467 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-842797"
	I1003 19:40:11.074496  483467 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-842797 --format={{.State.Status}}
	I1003 19:40:11.076248  483467 out.go:179] * Verifying Kubernetes components...
	I1003 19:40:11.080359  483467 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 19:40:11.129153  483467 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 19:40:11.133820  483467 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-842797"
	I1003 19:40:11.133864  483467 host.go:66] Checking if "default-k8s-diff-port-842797" exists ...
	I1003 19:40:11.134307  483467 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-842797 --format={{.State.Status}}
	I1003 19:40:11.134469  483467 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 19:40:11.134487  483467 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1003 19:40:11.134526  483467 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-842797
	I1003 19:40:11.188251  483467 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1003 19:40:11.188271  483467 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1003 19:40:11.188332  483467 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-842797
	I1003 19:40:11.192700  483467 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/default-k8s-diff-port-842797/id_rsa Username:docker}
	I1003 19:40:11.220860  483467 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/default-k8s-diff-port-842797/id_rsa Username:docker}
	I1003 19:40:11.449266  483467 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
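The long sed pipeline above patches the CoreDNS ConfigMap so that host.minikube.internal resolves to the host-side gateway (192.168.76.1) and query logging is enabled. Reconstructed from the sed expressions (not captured output), the affected part of the Corefile ends up roughly as:

	.:53 {
	    log
	    errors
	    # ...other default plugins unchanged...
	    hosts {
	       192.168.76.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	    # ...
	}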
	I1003 19:40:11.449441  483467 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 19:40:11.560602  483467 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1003 19:40:11.572512  483467 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 19:40:12.019928  483467 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-842797" to be "Ready" ...
	I1003 19:40:12.020370  483467 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1003 19:40:12.405145  483467 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1003 19:40:08.432130  486090 system_svc.go:56] duration metric: took 27.367059ms WaitForService to wait for kubelet
	I1003 19:40:08.432211  486090 kubeadm.go:586] duration metric: took 9.495114568s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 19:40:08.432262  486090 node_conditions.go:102] verifying NodePressure condition ...
	I1003 19:40:08.436492  486090 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1003 19:40:08.436569  486090 node_conditions.go:123] node cpu capacity is 2
	I1003 19:40:08.436603  486090 node_conditions.go:105] duration metric: took 4.322651ms to run NodePressure ...
	I1003 19:40:08.436652  486090 start.go:241] waiting for startup goroutines ...
	I1003 19:40:08.436679  486090 start.go:246] waiting for cluster config update ...
	I1003 19:40:08.436708  486090 start.go:255] writing updated cluster config ...
	I1003 19:40:08.437083  486090 ssh_runner.go:195] Run: rm -f paused
	I1003 19:40:08.441513  486090 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1003 19:40:08.500160  486090 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-bjdpd" in "kube-system" namespace to be "Ready" or be gone ...
	W1003 19:40:10.557912  486090 pod_ready.go:104] pod "coredns-66bc5c9577-bjdpd" is not "Ready", error: <nil>
	W1003 19:40:13.015714  486090 pod_ready.go:104] pod "coredns-66bc5c9577-bjdpd" is not "Ready", error: <nil>
	I1003 19:40:12.408846  483467 addons.go:514] duration metric: took 1.335628448s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1003 19:40:12.526333  483467 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-842797" context rescaled to 1 replicas
	W1003 19:40:14.024181  483467 node_ready.go:57] node "default-k8s-diff-port-842797" has "Ready":"False" status (will retry)
	W1003 19:40:15.511756  486090 pod_ready.go:104] pod "coredns-66bc5c9577-bjdpd" is not "Ready", error: <nil>
	W1003 19:40:17.517009  486090 pod_ready.go:104] pod "coredns-66bc5c9577-bjdpd" is not "Ready", error: <nil>
	W1003 19:40:16.524209  483467 node_ready.go:57] node "default-k8s-diff-port-842797" has "Ready":"False" status (will retry)
	W1003 19:40:19.024225  483467 node_ready.go:57] node "default-k8s-diff-port-842797" has "Ready":"False" status (will retry)
	W1003 19:40:20.009685  486090 pod_ready.go:104] pod "coredns-66bc5c9577-bjdpd" is not "Ready", error: <nil>
	W1003 19:40:22.011900  486090 pod_ready.go:104] pod "coredns-66bc5c9577-bjdpd" is not "Ready", error: <nil>
	W1003 19:40:21.523293  483467 node_ready.go:57] node "default-k8s-diff-port-842797" has "Ready":"False" status (will retry)
	W1003 19:40:23.523534  483467 node_ready.go:57] node "default-k8s-diff-port-842797" has "Ready":"False" status (will retry)
	W1003 19:40:25.523686  483467 node_ready.go:57] node "default-k8s-diff-port-842797" has "Ready":"False" status (will retry)
	W1003 19:40:24.506338  486090 pod_ready.go:104] pod "coredns-66bc5c9577-bjdpd" is not "Ready", error: <nil>
	W1003 19:40:27.008049  486090 pod_ready.go:104] pod "coredns-66bc5c9577-bjdpd" is not "Ready", error: <nil>
	W1003 19:40:28.023461  483467 node_ready.go:57] node "default-k8s-diff-port-842797" has "Ready":"False" status (will retry)
	W1003 19:40:30.025923  483467 node_ready.go:57] node "default-k8s-diff-port-842797" has "Ready":"False" status (will retry)
	W1003 19:40:29.008193  486090 pod_ready.go:104] pod "coredns-66bc5c9577-bjdpd" is not "Ready", error: <nil>
	W1003 19:40:31.507528  486090 pod_ready.go:104] pod "coredns-66bc5c9577-bjdpd" is not "Ready", error: <nil>
	W1003 19:40:32.523028  483467 node_ready.go:57] node "default-k8s-diff-port-842797" has "Ready":"False" status (will retry)
	W1003 19:40:34.523316  483467 node_ready.go:57] node "default-k8s-diff-port-842797" has "Ready":"False" status (will retry)
	W1003 19:40:34.011446  486090 pod_ready.go:104] pod "coredns-66bc5c9577-bjdpd" is not "Ready", error: <nil>
	W1003 19:40:36.505721  486090 pod_ready.go:104] pod "coredns-66bc5c9577-bjdpd" is not "Ready", error: <nil>
	W1003 19:40:37.023775  483467 node_ready.go:57] node "default-k8s-diff-port-842797" has "Ready":"False" status (will retry)
	W1003 19:40:39.523284  483467 node_ready.go:57] node "default-k8s-diff-port-842797" has "Ready":"False" status (will retry)
	W1003 19:40:38.508590  486090 pod_ready.go:104] pod "coredns-66bc5c9577-bjdpd" is not "Ready", error: <nil>
	I1003 19:40:39.508669  486090 pod_ready.go:94] pod "coredns-66bc5c9577-bjdpd" is "Ready"
	I1003 19:40:39.508702  486090 pod_ready.go:86] duration metric: took 31.008454932s for pod "coredns-66bc5c9577-bjdpd" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:40:39.511349  486090 pod_ready.go:83] waiting for pod "etcd-embed-certs-327416" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:40:39.516072  486090 pod_ready.go:94] pod "etcd-embed-certs-327416" is "Ready"
	I1003 19:40:39.516099  486090 pod_ready.go:86] duration metric: took 4.722724ms for pod "etcd-embed-certs-327416" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:40:39.518442  486090 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-327416" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:40:39.524871  486090 pod_ready.go:94] pod "kube-apiserver-embed-certs-327416" is "Ready"
	I1003 19:40:39.524898  486090 pod_ready.go:86] duration metric: took 6.427628ms for pod "kube-apiserver-embed-certs-327416" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:40:39.527447  486090 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-327416" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:40:39.704173  486090 pod_ready.go:94] pod "kube-controller-manager-embed-certs-327416" is "Ready"
	I1003 19:40:39.704202  486090 pod_ready.go:86] duration metric: took 176.734521ms for pod "kube-controller-manager-embed-certs-327416" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:40:39.904691  486090 pod_ready.go:83] waiting for pod "kube-proxy-ncw55" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:40:40.303788  486090 pod_ready.go:94] pod "kube-proxy-ncw55" is "Ready"
	I1003 19:40:40.303818  486090 pod_ready.go:86] duration metric: took 399.10123ms for pod "kube-proxy-ncw55" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:40:40.505055  486090 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-327416" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:40:40.904471  486090 pod_ready.go:94] pod "kube-scheduler-embed-certs-327416" is "Ready"
	I1003 19:40:40.904502  486090 pod_ready.go:86] duration metric: took 399.421096ms for pod "kube-scheduler-embed-certs-327416" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:40:40.904515  486090 pod_ready.go:40] duration metric: took 32.462920533s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1003 19:40:40.956798  486090 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1003 19:40:40.959742  486090 out.go:179] * Done! kubectl is now configured to use "embed-certs-327416" cluster and "default" namespace by default
	W1003 19:40:41.523805  483467 node_ready.go:57] node "default-k8s-diff-port-842797" has "Ready":"False" status (will retry)
	W1003 19:40:44.024139  483467 node_ready.go:57] node "default-k8s-diff-port-842797" has "Ready":"False" status (will retry)
	W1003 19:40:46.523635  483467 node_ready.go:57] node "default-k8s-diff-port-842797" has "Ready":"False" status (will retry)
	W1003 19:40:48.523943  483467 node_ready.go:57] node "default-k8s-diff-port-842797" has "Ready":"False" status (will retry)
	W1003 19:40:50.524217  483467 node_ready.go:57] node "default-k8s-diff-port-842797" has "Ready":"False" status (will retry)
	I1003 19:40:52.027836  483467 node_ready.go:49] node "default-k8s-diff-port-842797" is "Ready"
	I1003 19:40:52.027862  483467 node_ready.go:38] duration metric: took 40.007850149s for node "default-k8s-diff-port-842797" to be "Ready" ...
	I1003 19:40:52.027877  483467 api_server.go:52] waiting for apiserver process to appear ...
	I1003 19:40:52.027944  483467 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 19:40:52.040014  483467 api_server.go:72] duration metric: took 40.967403235s to wait for apiserver process to appear ...
	I1003 19:40:52.040039  483467 api_server.go:88] waiting for apiserver healthz status ...
	I1003 19:40:52.040072  483467 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1003 19:40:52.048508  483467 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1003 19:40:52.049767  483467 api_server.go:141] control plane version: v1.34.1
	I1003 19:40:52.049795  483467 api_server.go:131] duration metric: took 9.749928ms to wait for apiserver health ...
	I1003 19:40:52.049805  483467 system_pods.go:43] waiting for kube-system pods to appear ...
	I1003 19:40:52.053328  483467 system_pods.go:59] 8 kube-system pods found
	I1003 19:40:52.053364  483467 system_pods.go:61] "coredns-66bc5c9577-l8knz" [20442eef-faaa-4dfb-bd27-e8f4fda45d0e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1003 19:40:52.053396  483467 system_pods.go:61] "etcd-default-k8s-diff-port-842797" [8db70af0-84e1-42e2-8676-3db2f2732f13] Running
	I1003 19:40:52.053412  483467 system_pods.go:61] "kindnet-96q8s" [ab4664bf-01c0-4b62-9eb8-f65194dff517] Running
	I1003 19:40:52.053417  483467 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-842797" [c7b2a799-b6f6-4be1-a67c-d603d2a8cd7e] Running
	I1003 19:40:52.053427  483467 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-842797" [44ec1bf9-f1e3-4342-bd43-2202ff291aeb] Running
	I1003 19:40:52.053443  483467 system_pods.go:61] "kube-proxy-gvslj" [3cfa5fdd-13b6-4c43-aa02-a74c256ceed2] Running
	I1003 19:40:52.053449  483467 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-842797" [6aba1d05-eec7-4030-b4ee-2b39cd76ec2a] Running
	I1003 19:40:52.053471  483467 system_pods.go:61] "storage-provisioner" [e700db76-d3d4-422f-8069-cb3a0b9ebe86] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1003 19:40:52.053479  483467 system_pods.go:74] duration metric: took 3.669276ms to wait for pod list to return data ...
	I1003 19:40:52.053489  483467 default_sa.go:34] waiting for default service account to be created ...
	I1003 19:40:52.056280  483467 default_sa.go:45] found service account: "default"
	I1003 19:40:52.056303  483467 default_sa.go:55] duration metric: took 2.805279ms for default service account to be created ...
	I1003 19:40:52.056313  483467 system_pods.go:116] waiting for k8s-apps to be running ...
	I1003 19:40:52.059677  483467 system_pods.go:86] 8 kube-system pods found
	I1003 19:40:52.059777  483467 system_pods.go:89] "coredns-66bc5c9577-l8knz" [20442eef-faaa-4dfb-bd27-e8f4fda45d0e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1003 19:40:52.059816  483467 system_pods.go:89] "etcd-default-k8s-diff-port-842797" [8db70af0-84e1-42e2-8676-3db2f2732f13] Running
	I1003 19:40:52.059837  483467 system_pods.go:89] "kindnet-96q8s" [ab4664bf-01c0-4b62-9eb8-f65194dff517] Running
	I1003 19:40:52.059875  483467 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-842797" [c7b2a799-b6f6-4be1-a67c-d603d2a8cd7e] Running
	I1003 19:40:52.059901  483467 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-842797" [44ec1bf9-f1e3-4342-bd43-2202ff291aeb] Running
	I1003 19:40:52.059924  483467 system_pods.go:89] "kube-proxy-gvslj" [3cfa5fdd-13b6-4c43-aa02-a74c256ceed2] Running
	I1003 19:40:52.059958  483467 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-842797" [6aba1d05-eec7-4030-b4ee-2b39cd76ec2a] Running
	I1003 19:40:52.059997  483467 system_pods.go:89] "storage-provisioner" [e700db76-d3d4-422f-8069-cb3a0b9ebe86] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1003 19:40:52.060442  483467 retry.go:31] will retry after 261.053866ms: missing components: kube-dns
	I1003 19:40:52.329752  483467 system_pods.go:86] 8 kube-system pods found
	I1003 19:40:52.329800  483467 system_pods.go:89] "coredns-66bc5c9577-l8knz" [20442eef-faaa-4dfb-bd27-e8f4fda45d0e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1003 19:40:52.329808  483467 system_pods.go:89] "etcd-default-k8s-diff-port-842797" [8db70af0-84e1-42e2-8676-3db2f2732f13] Running
	I1003 19:40:52.329815  483467 system_pods.go:89] "kindnet-96q8s" [ab4664bf-01c0-4b62-9eb8-f65194dff517] Running
	I1003 19:40:52.329820  483467 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-842797" [c7b2a799-b6f6-4be1-a67c-d603d2a8cd7e] Running
	I1003 19:40:52.329824  483467 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-842797" [44ec1bf9-f1e3-4342-bd43-2202ff291aeb] Running
	I1003 19:40:52.329829  483467 system_pods.go:89] "kube-proxy-gvslj" [3cfa5fdd-13b6-4c43-aa02-a74c256ceed2] Running
	I1003 19:40:52.329833  483467 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-842797" [6aba1d05-eec7-4030-b4ee-2b39cd76ec2a] Running
	I1003 19:40:52.329838  483467 system_pods.go:89] "storage-provisioner" [e700db76-d3d4-422f-8069-cb3a0b9ebe86] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1003 19:40:52.329865  483467 retry.go:31] will retry after 323.054015ms: missing components: kube-dns
	I1003 19:40:52.656855  483467 system_pods.go:86] 8 kube-system pods found
	I1003 19:40:52.656883  483467 system_pods.go:89] "coredns-66bc5c9577-l8knz" [20442eef-faaa-4dfb-bd27-e8f4fda45d0e] Running
	I1003 19:40:52.656890  483467 system_pods.go:89] "etcd-default-k8s-diff-port-842797" [8db70af0-84e1-42e2-8676-3db2f2732f13] Running
	I1003 19:40:52.656896  483467 system_pods.go:89] "kindnet-96q8s" [ab4664bf-01c0-4b62-9eb8-f65194dff517] Running
	I1003 19:40:52.656901  483467 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-842797" [c7b2a799-b6f6-4be1-a67c-d603d2a8cd7e] Running
	I1003 19:40:52.656906  483467 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-842797" [44ec1bf9-f1e3-4342-bd43-2202ff291aeb] Running
	I1003 19:40:52.656912  483467 system_pods.go:89] "kube-proxy-gvslj" [3cfa5fdd-13b6-4c43-aa02-a74c256ceed2] Running
	I1003 19:40:52.656916  483467 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-842797" [6aba1d05-eec7-4030-b4ee-2b39cd76ec2a] Running
	I1003 19:40:52.656921  483467 system_pods.go:89] "storage-provisioner" [e700db76-d3d4-422f-8069-cb3a0b9ebe86] Running
	I1003 19:40:52.656928  483467 system_pods.go:126] duration metric: took 600.610578ms to wait for k8s-apps to be running ...
	I1003 19:40:52.656936  483467 system_svc.go:44] waiting for kubelet service to be running ....
	I1003 19:40:52.656997  483467 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 19:40:52.673541  483467 system_svc.go:56] duration metric: took 16.594891ms WaitForService to wait for kubelet
	I1003 19:40:52.673568  483467 kubeadm.go:586] duration metric: took 41.60096318s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 19:40:52.673585  483467 node_conditions.go:102] verifying NodePressure condition ...
	I1003 19:40:52.677840  483467 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1003 19:40:52.677870  483467 node_conditions.go:123] node cpu capacity is 2
	I1003 19:40:52.677883  483467 node_conditions.go:105] duration metric: took 4.29262ms to run NodePressure ...
	I1003 19:40:52.677895  483467 start.go:241] waiting for startup goroutines ...
	I1003 19:40:52.677903  483467 start.go:246] waiting for cluster config update ...
	I1003 19:40:52.677914  483467 start.go:255] writing updated cluster config ...
	I1003 19:40:52.678211  483467 ssh_runner.go:195] Run: rm -f paused
	I1003 19:40:52.681908  483467 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1003 19:40:52.757076  483467 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-l8knz" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:40:52.762984  483467 pod_ready.go:94] pod "coredns-66bc5c9577-l8knz" is "Ready"
	I1003 19:40:52.763010  483467 pod_ready.go:86] duration metric: took 5.909523ms for pod "coredns-66bc5c9577-l8knz" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:40:52.765790  483467 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-842797" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:40:52.770961  483467 pod_ready.go:94] pod "etcd-default-k8s-diff-port-842797" is "Ready"
	I1003 19:40:52.770981  483467 pod_ready.go:86] duration metric: took 5.173988ms for pod "etcd-default-k8s-diff-port-842797" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:40:52.774100  483467 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-842797" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:40:52.779416  483467 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-842797" is "Ready"
	I1003 19:40:52.779438  483467 pod_ready.go:86] duration metric: took 5.315413ms for pod "kube-apiserver-default-k8s-diff-port-842797" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:40:52.782100  483467 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-842797" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:40:53.086517  483467 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-842797" is "Ready"
	I1003 19:40:53.086550  483467 pod_ready.go:86] duration metric: took 304.4295ms for pod "kube-controller-manager-default-k8s-diff-port-842797" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:40:53.290421  483467 pod_ready.go:83] waiting for pod "kube-proxy-gvslj" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:40:53.686648  483467 pod_ready.go:94] pod "kube-proxy-gvslj" is "Ready"
	I1003 19:40:53.686681  483467 pod_ready.go:86] duration metric: took 396.235813ms for pod "kube-proxy-gvslj" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:40:53.887166  483467 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-842797" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:40:54.286893  483467 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-842797" is "Ready"
	I1003 19:40:54.286916  483467 pod_ready.go:86] duration metric: took 399.662262ms for pod "kube-scheduler-default-k8s-diff-port-842797" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:40:54.286928  483467 pod_ready.go:40] duration metric: took 1.60498969s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1003 19:40:54.371232  483467 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1003 19:40:54.375125  483467 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-842797" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 03 19:40:37 embed-certs-327416 crio[646]: time="2025-10-03T19:40:37.903001254Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=c133263a-f5e8-4a1d-8698-3fa93541c765 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 19:40:37 embed-certs-327416 crio[646]: time="2025-10-03T19:40:37.908705506Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=a19c2d01-8e86-44ca-8d1a-e4a0d4343abc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:40:37 embed-certs-327416 crio[646]: time="2025-10-03T19:40:37.909385771Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 19:40:37 embed-certs-327416 crio[646]: time="2025-10-03T19:40:37.916409593Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 19:40:37 embed-certs-327416 crio[646]: time="2025-10-03T19:40:37.916576791Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/18900d5caa3a3b4bad306d9a2cea9e3a63d7de638d707e02a5586f6e1ee15d9d/merged/etc/passwd: no such file or directory"
	Oct 03 19:40:37 embed-certs-327416 crio[646]: time="2025-10-03T19:40:37.916598478Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/18900d5caa3a3b4bad306d9a2cea9e3a63d7de638d707e02a5586f6e1ee15d9d/merged/etc/group: no such file or directory"
	Oct 03 19:40:37 embed-certs-327416 crio[646]: time="2025-10-03T19:40:37.916870572Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 19:40:37 embed-certs-327416 crio[646]: time="2025-10-03T19:40:37.932542231Z" level=info msg="Created container 5e66dc6a1481b362f77729de7b87a40c80a9b3559f540b5a8bd6f55ec6c8f731: kube-system/storage-provisioner/storage-provisioner" id=a19c2d01-8e86-44ca-8d1a-e4a0d4343abc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:40:37 embed-certs-327416 crio[646]: time="2025-10-03T19:40:37.937049862Z" level=info msg="Starting container: 5e66dc6a1481b362f77729de7b87a40c80a9b3559f540b5a8bd6f55ec6c8f731" id=b9a42191-8cd4-42b6-9c14-44c97c14a514 name=/runtime.v1.RuntimeService/StartContainer
	Oct 03 19:40:37 embed-certs-327416 crio[646]: time="2025-10-03T19:40:37.94088819Z" level=info msg="Started container" PID=1632 containerID=5e66dc6a1481b362f77729de7b87a40c80a9b3559f540b5a8bd6f55ec6c8f731 description=kube-system/storage-provisioner/storage-provisioner id=b9a42191-8cd4-42b6-9c14-44c97c14a514 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a08ec79b430293fc18fe391a6d7e109bd33baac133eb712f4cc8e57ccb685f26
	Oct 03 19:40:47 embed-certs-327416 crio[646]: time="2025-10-03T19:40:47.542787424Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 03 19:40:47 embed-certs-327416 crio[646]: time="2025-10-03T19:40:47.547313935Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 03 19:40:47 embed-certs-327416 crio[646]: time="2025-10-03T19:40:47.547345246Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 03 19:40:47 embed-certs-327416 crio[646]: time="2025-10-03T19:40:47.547368122Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 03 19:40:47 embed-certs-327416 crio[646]: time="2025-10-03T19:40:47.550707333Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 03 19:40:47 embed-certs-327416 crio[646]: time="2025-10-03T19:40:47.550746004Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 03 19:40:47 embed-certs-327416 crio[646]: time="2025-10-03T19:40:47.550769324Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 03 19:40:47 embed-certs-327416 crio[646]: time="2025-10-03T19:40:47.55389578Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 03 19:40:47 embed-certs-327416 crio[646]: time="2025-10-03T19:40:47.554060164Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 03 19:40:47 embed-certs-327416 crio[646]: time="2025-10-03T19:40:47.554096916Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 03 19:40:47 embed-certs-327416 crio[646]: time="2025-10-03T19:40:47.55746185Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 03 19:40:47 embed-certs-327416 crio[646]: time="2025-10-03T19:40:47.557497362Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 03 19:40:47 embed-certs-327416 crio[646]: time="2025-10-03T19:40:47.557522249Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 03 19:40:47 embed-certs-327416 crio[646]: time="2025-10-03T19:40:47.561498023Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 03 19:40:47 embed-certs-327416 crio[646]: time="2025-10-03T19:40:47.561537244Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	5e66dc6a1481b       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           17 seconds ago      Running             storage-provisioner         2                   a08ec79b43029       storage-provisioner                          kube-system
	a738125ff91fa       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           23 seconds ago      Exited              dashboard-metrics-scraper   2                   5c4eb3421c96a       dashboard-metrics-scraper-6ffb444bf9-pdwhc   kubernetes-dashboard
	a789d122b33c0       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   38 seconds ago      Running             kubernetes-dashboard        0                   a3dcd7fd1edef       kubernetes-dashboard-855c9754f9-4hzk6        kubernetes-dashboard
	f08f692651a4c       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           48 seconds ago      Running             coredns                     1                   2e5cbd6315354       coredns-66bc5c9577-bjdpd                     kube-system
	f4b23575b27ca       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           48 seconds ago      Running             busybox                     1                   8c58b36d1d8ac       busybox                                      default
	e082ac152bed0       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           48 seconds ago      Running             kube-proxy                  1                   cda0e0a01f05e       kube-proxy-ncw55                             kube-system
	feab4d04b3ff4       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           48 seconds ago      Running             kindnet-cni                 1                   61aece1b64bff       kindnet-2jswv                                kube-system
	a099b0263e1ca       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           48 seconds ago      Exited              storage-provisioner         1                   a08ec79b43029       storage-provisioner                          kube-system
	7251d8be4bbe1       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           57 seconds ago      Running             kube-controller-manager     1                   c300579b36ce0       kube-controller-manager-embed-certs-327416   kube-system
	d175d98dcd2f4       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           57 seconds ago      Running             kube-apiserver              1                   c899c8cde9b07       kube-apiserver-embed-certs-327416            kube-system
	58e88d8c2849a       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           57 seconds ago      Running             kube-scheduler              1                   096ab3d677b68       kube-scheduler-embed-certs-327416            kube-system
	0c6c5a56f754c       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           57 seconds ago      Running             etcd                        1                   3060be50efc34       etcd-embed-certs-327416                      kube-system
	
	
	==> coredns [f08f692651a4c24dbc7f5c2d01b62f4b3444fe292b2f5c83c3522aac293a2680] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:32920 - 42303 "HINFO IN 3835729374393202696.7808947450009168741. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.024206034s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-327416
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-327416
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a43873c79fc22f8b1ccd29d3dfa635d392b09335
	                    minikube.k8s.io/name=embed-certs-327416
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_03T19_38_33_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 03 Oct 2025 19:38:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-327416
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 03 Oct 2025 19:40:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 03 Oct 2025 19:40:36 +0000   Fri, 03 Oct 2025 19:38:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 03 Oct 2025 19:40:36 +0000   Fri, 03 Oct 2025 19:38:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 03 Oct 2025 19:40:36 +0000   Fri, 03 Oct 2025 19:38:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 03 Oct 2025 19:40:36 +0000   Fri, 03 Oct 2025 19:39:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-327416
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c7fbf585ecd44806b77018469dd7b7db
	  System UUID:                fb79a29c-023c-4bd8-a646-01fac5e931e0
	  Boot ID:                    3762136e-8bec-4104-a5cb-0b1976f6048e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-66bc5c9577-bjdpd                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m18s
	  kube-system                 etcd-embed-certs-327416                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m23s
	  kube-system                 kindnet-2jswv                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m18s
	  kube-system                 kube-apiserver-embed-certs-327416             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kube-controller-manager-embed-certs-327416    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 kube-proxy-ncw55                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 kube-scheduler-embed-certs-327416             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m17s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-pdwhc    0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-4hzk6         0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m16s                  kube-proxy       
	  Normal   Starting                 47s                    kube-proxy       
	  Normal   NodeHasSufficientMemory  2m32s (x8 over 2m32s)  kubelet          Node embed-certs-327416 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m32s (x8 over 2m32s)  kubelet          Node embed-certs-327416 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m32s (x8 over 2m32s)  kubelet          Node embed-certs-327416 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m23s                  kubelet          Node embed-certs-327416 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m23s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m23s                  kubelet          Node embed-certs-327416 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m23s                  kubelet          Node embed-certs-327416 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m23s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m19s                  node-controller  Node embed-certs-327416 event: Registered Node embed-certs-327416 in Controller
	  Normal   NodeReady                96s                    kubelet          Node embed-certs-327416 status is now: NodeReady
	  Normal   Starting                 58s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 58s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  58s (x8 over 58s)      kubelet          Node embed-certs-327416 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    58s (x8 over 58s)      kubelet          Node embed-certs-327416 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     58s (x8 over 58s)      kubelet          Node embed-certs-327416 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           45s                    node-controller  Node embed-certs-327416 event: Registered Node embed-certs-327416 in Controller
	
	
	==> dmesg <==
	[Oct 3 19:11] overlayfs: idmapped layers are currently not supported
	[  +4.287643] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:12] overlayfs: idmapped layers are currently not supported
	[ +24.839009] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:13] overlayfs: idmapped layers are currently not supported
	[ +26.493253] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:15] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:16] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:17] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000010] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Oct 3 19:18] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:20] overlayfs: idmapped layers are currently not supported
	[ +32.018892] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:22] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:24] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:26] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:32] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:34] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:35] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:36] overlayfs: idmapped layers are currently not supported
	[  +4.740983] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:38] overlayfs: idmapped layers are currently not supported
	[ +12.897300] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:39] overlayfs: idmapped layers are currently not supported
	[  +4.104516] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [0c6c5a56f754c48cee635b6a3f179cd14335b49d4105c542ea8de2a52f7a1289] <==
	{"level":"warn","ts":"2025-10-03T19:40:03.275933Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:40:03.341638Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:40:03.402043Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:40:03.432981Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:40:03.497717Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:40:03.537558Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:40:03.584906Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:40:03.626142Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:40:03.660932Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:40:03.707810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:40:03.766595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:40:03.901767Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:40:03.937810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:40:03.969646Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:40:04.008508Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:40:04.025004Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:40:04.047218Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:40:04.071160Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:40:04.107014Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:40:04.148382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:40:04.192365Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:40:04.243472Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:40:04.305534Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:40:04.356821Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:40:04.523305Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43074","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 19:40:56 up  2:23,  0 user,  load average: 3.72, 3.25, 2.42
	Linux embed-certs-327416 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [feab4d04b3ff4dcec9c7a34ced7bd215e07b33afff0b593771ec98a30d1421e9] <==
	I1003 19:40:07.328195       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1003 19:40:07.328692       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1003 19:40:07.335212       1 main.go:148] setting mtu 1500 for CNI 
	I1003 19:40:07.335244       1 main.go:178] kindnetd IP family: "ipv4"
	I1003 19:40:07.335262       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-03T19:40:07Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1003 19:40:07.538932       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1003 19:40:07.538949       1 controller.go:381] "Waiting for informer caches to sync"
	I1003 19:40:07.538957       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1003 19:40:07.539234       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1003 19:40:37.538870       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1003 19:40:37.538997       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1003 19:40:37.539878       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1003 19:40:37.562395       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1003 19:40:39.039980       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1003 19:40:39.040015       1 metrics.go:72] Registering metrics
	I1003 19:40:39.040087       1 controller.go:711] "Syncing nftables rules"
	I1003 19:40:47.542416       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1003 19:40:47.542474       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d175d98dcd2f4aad68e57c312506a537fcec4add7ab32b2ffa4c3126efd41601] <==
	I1003 19:40:06.066518       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1003 19:40:06.066980       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1003 19:40:06.102218       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1003 19:40:06.102264       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1003 19:40:06.102294       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1003 19:40:06.102331       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1003 19:40:06.118651       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1003 19:40:06.138080       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1003 19:40:06.148906       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1003 19:40:06.150191       1 aggregator.go:171] initial CRD sync complete...
	I1003 19:40:06.150225       1 autoregister_controller.go:144] Starting autoregister controller
	I1003 19:40:06.150233       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1003 19:40:06.150240       1 cache.go:39] Caches are synced for autoregister controller
	E1003 19:40:06.294444       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1003 19:40:06.563465       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1003 19:40:06.681907       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1003 19:40:07.682858       1 controller.go:667] quota admission added evaluator for: namespaces
	I1003 19:40:07.857597       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1003 19:40:08.005782       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1003 19:40:08.094510       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1003 19:40:08.289518       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.56.130"}
	I1003 19:40:08.324167       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.225.1"}
	I1003 19:40:10.465384       1 controller.go:667] quota admission added evaluator for: endpoints
	I1003 19:40:10.573960       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1003 19:40:10.637625       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [7251d8be4bbe1feadb8d7586aad5c359dbd66fd31d01b439cbe4b247e9edacb9] <==
	I1003 19:40:10.217538       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1003 19:40:10.210395       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1003 19:40:10.221012       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1003 19:40:10.221689       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1003 19:40:10.222857       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1003 19:40:10.223442       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1003 19:40:10.224665       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1003 19:40:10.236547       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1003 19:40:10.239511       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1003 19:40:10.240869       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1003 19:40:10.243007       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1003 19:40:10.249390       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1003 19:40:10.251620       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1003 19:40:10.257402       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1003 19:40:10.257612       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1003 19:40:10.257682       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1003 19:40:10.263746       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1003 19:40:10.263866       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1003 19:40:10.268770       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1003 19:40:10.274922       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1003 19:40:10.276761       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1003 19:40:10.279087       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1003 19:40:10.309872       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1003 19:40:10.315811       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1003 19:40:10.315877       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	
	
	==> kube-proxy [e082ac152bed0226fa5fbaf16b5adae1367f37de196398b9aa393d4b2682c3bb] <==
	I1003 19:40:08.360892       1 server_linux.go:53] "Using iptables proxy"
	I1003 19:40:08.534450       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1003 19:40:08.642693       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1003 19:40:08.642816       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1003 19:40:08.646121       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1003 19:40:08.695242       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1003 19:40:08.695395       1 server_linux.go:132] "Using iptables Proxier"
	I1003 19:40:08.701460       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1003 19:40:08.715260       1 server.go:527] "Version info" version="v1.34.1"
	I1003 19:40:08.715295       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1003 19:40:08.717432       1 config.go:200] "Starting service config controller"
	I1003 19:40:08.717457       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1003 19:40:08.717491       1 config.go:106] "Starting endpoint slice config controller"
	I1003 19:40:08.717496       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1003 19:40:08.717514       1 config.go:403] "Starting serviceCIDR config controller"
	I1003 19:40:08.717518       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1003 19:40:08.718461       1 config.go:309] "Starting node config controller"
	I1003 19:40:08.718483       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1003 19:40:08.718491       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1003 19:40:08.825645       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1003 19:40:08.825757       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1003 19:40:08.825768       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [58e88d8c2849a5437eb7767eb255d61ad53372f61e98f7b15fba814d13e38b12] <==
	I1003 19:40:08.536036       1 serving.go:386] Generated self-signed cert in-memory
	I1003 19:40:10.452931       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1003 19:40:10.452979       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1003 19:40:10.476523       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1003 19:40:10.476626       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1003 19:40:10.476656       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1003 19:40:10.476694       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1003 19:40:10.483119       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1003 19:40:10.485599       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1003 19:40:10.485969       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1003 19:40:10.485979       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1003 19:40:10.577035       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1003 19:40:10.586684       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1003 19:40:10.586830       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Oct 03 19:40:10 embed-certs-327416 kubelet[771]: I1003 19:40:10.961023     771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/4e9fe78a-88e3-4ce0-9e2e-9e4442ab2967-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-4hzk6\" (UID: \"4e9fe78a-88e3-4ce0-9e2e-9e4442ab2967\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4hzk6"
	Oct 03 19:40:10 embed-certs-327416 kubelet[771]: I1003 19:40:10.961098     771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wjrj\" (UniqueName: \"kubernetes.io/projected/4e9fe78a-88e3-4ce0-9e2e-9e4442ab2967-kube-api-access-6wjrj\") pod \"kubernetes-dashboard-855c9754f9-4hzk6\" (UID: \"4e9fe78a-88e3-4ce0-9e2e-9e4442ab2967\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4hzk6"
	Oct 03 19:40:11 embed-certs-327416 kubelet[771]: E1003 19:40:11.979567     771 projected.go:291] Couldn't get configMap kubernetes-dashboard/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Oct 03 19:40:11 embed-certs-327416 kubelet[771]: E1003 19:40:11.979634     771 projected.go:196] Error preparing data for projected volume kube-api-access-pt58f for pod kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pdwhc: failed to sync configmap cache: timed out waiting for the condition
	Oct 03 19:40:11 embed-certs-327416 kubelet[771]: E1003 19:40:11.979739     771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5e6f44a6-6e24-4a4a-be20-79f9cfa4dc30-kube-api-access-pt58f podName:5e6f44a6-6e24-4a4a-be20-79f9cfa4dc30 nodeName:}" failed. No retries permitted until 2025-10-03 19:40:12.479707272 +0000 UTC m=+15.238358753 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-pt58f" (UniqueName: "kubernetes.io/projected/5e6f44a6-6e24-4a4a-be20-79f9cfa4dc30-kube-api-access-pt58f") pod "dashboard-metrics-scraper-6ffb444bf9-pdwhc" (UID: "5e6f44a6-6e24-4a4a-be20-79f9cfa4dc30") : failed to sync configmap cache: timed out waiting for the condition
	Oct 03 19:40:12 embed-certs-327416 kubelet[771]: W1003 19:40:12.076595     771 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/7044b9fbdfefb3fd8bce7381adae2abdcd93d79fb8452cc72e2f26e58ccd8222/crio-a3dcd7fd1edefcc4b713308880237296fbe694ed3868c2fc919d67ecbf22e208 WatchSource:0}: Error finding container a3dcd7fd1edefcc4b713308880237296fbe694ed3868c2fc919d67ecbf22e208: Status 404 returned error can't find the container with id a3dcd7fd1edefcc4b713308880237296fbe694ed3868c2fc919d67ecbf22e208
	Oct 03 19:40:12 embed-certs-327416 kubelet[771]: W1003 19:40:12.657298     771 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/7044b9fbdfefb3fd8bce7381adae2abdcd93d79fb8452cc72e2f26e58ccd8222/crio-5c4eb3421c96a161a04d970875259c032e6566f813f81c5087e5b90315e4087e WatchSource:0}: Error finding container 5c4eb3421c96a161a04d970875259c032e6566f813f81c5087e5b90315e4087e: Status 404 returned error can't find the container with id 5c4eb3421c96a161a04d970875259c032e6566f813f81c5087e5b90315e4087e
	Oct 03 19:40:21 embed-certs-327416 kubelet[771]: I1003 19:40:21.852775     771 scope.go:117] "RemoveContainer" containerID="a29f167ca9d8327aa605d948ba460fdb021614a61d566ba513e53dbdfeeb2206"
	Oct 03 19:40:21 embed-certs-327416 kubelet[771]: I1003 19:40:21.883079     771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4hzk6" podStartSLOduration=7.201842542 podStartE2EDuration="11.883060988s" podCreationTimestamp="2025-10-03 19:40:10 +0000 UTC" firstStartedPulling="2025-10-03 19:40:12.087076965 +0000 UTC m=+14.845728445" lastFinishedPulling="2025-10-03 19:40:16.76829541 +0000 UTC m=+19.526946891" observedRunningTime="2025-10-03 19:40:16.874420136 +0000 UTC m=+19.633071617" watchObservedRunningTime="2025-10-03 19:40:21.883060988 +0000 UTC m=+24.641712469"
	Oct 03 19:40:22 embed-certs-327416 kubelet[771]: I1003 19:40:22.856931     771 scope.go:117] "RemoveContainer" containerID="a29f167ca9d8327aa605d948ba460fdb021614a61d566ba513e53dbdfeeb2206"
	Oct 03 19:40:22 embed-certs-327416 kubelet[771]: I1003 19:40:22.857234     771 scope.go:117] "RemoveContainer" containerID="2bdaa5b7d0db718394917a8fcfe82c67f2bf8b9950ac1ba169c79c77673ff700"
	Oct 03 19:40:22 embed-certs-327416 kubelet[771]: E1003 19:40:22.858280     771 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-pdwhc_kubernetes-dashboard(5e6f44a6-6e24-4a4a-be20-79f9cfa4dc30)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pdwhc" podUID="5e6f44a6-6e24-4a4a-be20-79f9cfa4dc30"
	Oct 03 19:40:23 embed-certs-327416 kubelet[771]: I1003 19:40:23.861395     771 scope.go:117] "RemoveContainer" containerID="2bdaa5b7d0db718394917a8fcfe82c67f2bf8b9950ac1ba169c79c77673ff700"
	Oct 03 19:40:23 embed-certs-327416 kubelet[771]: E1003 19:40:23.861926     771 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-pdwhc_kubernetes-dashboard(5e6f44a6-6e24-4a4a-be20-79f9cfa4dc30)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pdwhc" podUID="5e6f44a6-6e24-4a4a-be20-79f9cfa4dc30"
	Oct 03 19:40:32 embed-certs-327416 kubelet[771]: I1003 19:40:32.582102     771 scope.go:117] "RemoveContainer" containerID="2bdaa5b7d0db718394917a8fcfe82c67f2bf8b9950ac1ba169c79c77673ff700"
	Oct 03 19:40:32 embed-certs-327416 kubelet[771]: I1003 19:40:32.883522     771 scope.go:117] "RemoveContainer" containerID="2bdaa5b7d0db718394917a8fcfe82c67f2bf8b9950ac1ba169c79c77673ff700"
	Oct 03 19:40:32 embed-certs-327416 kubelet[771]: I1003 19:40:32.883794     771 scope.go:117] "RemoveContainer" containerID="a738125ff91fa9557f957b47e040af0afc4e0c20eba8d133f0a7232ec66b0d66"
	Oct 03 19:40:32 embed-certs-327416 kubelet[771]: E1003 19:40:32.883955     771 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-pdwhc_kubernetes-dashboard(5e6f44a6-6e24-4a4a-be20-79f9cfa4dc30)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pdwhc" podUID="5e6f44a6-6e24-4a4a-be20-79f9cfa4dc30"
	Oct 03 19:40:37 embed-certs-327416 kubelet[771]: I1003 19:40:37.898146     771 scope.go:117] "RemoveContainer" containerID="a099b0263e1ca1acdf33e1af73c68951785e54c0ba213fdfbcb1bb8d81e98644"
	Oct 03 19:40:42 embed-certs-327416 kubelet[771]: I1003 19:40:42.581499     771 scope.go:117] "RemoveContainer" containerID="a738125ff91fa9557f957b47e040af0afc4e0c20eba8d133f0a7232ec66b0d66"
	Oct 03 19:40:42 embed-certs-327416 kubelet[771]: E1003 19:40:42.581690     771 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-pdwhc_kubernetes-dashboard(5e6f44a6-6e24-4a4a-be20-79f9cfa4dc30)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pdwhc" podUID="5e6f44a6-6e24-4a4a-be20-79f9cfa4dc30"
	Oct 03 19:40:53 embed-certs-327416 kubelet[771]: I1003 19:40:53.206540     771 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 03 19:40:53 embed-certs-327416 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 03 19:40:53 embed-certs-327416 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 03 19:40:53 embed-certs-327416 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [a789d122b33c055f37ef455982128473a2a103a67ed53fffdb7d04275c3e1c56] <==
	2025/10/03 19:40:16 Using namespace: kubernetes-dashboard
	2025/10/03 19:40:16 Using in-cluster config to connect to apiserver
	2025/10/03 19:40:16 Using secret token for csrf signing
	2025/10/03 19:40:16 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/03 19:40:16 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/03 19:40:16 Successful initial request to the apiserver, version: v1.34.1
	2025/10/03 19:40:16 Generating JWE encryption key
	2025/10/03 19:40:16 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/03 19:40:16 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/03 19:40:18 Initializing JWE encryption key from synchronized object
	2025/10/03 19:40:18 Creating in-cluster Sidecar client
	2025/10/03 19:40:18 Serving insecurely on HTTP port: 9090
	2025/10/03 19:40:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/03 19:40:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/03 19:40:16 Starting overwatch
	
	
	==> storage-provisioner [5e66dc6a1481b362f77729de7b87a40c80a9b3559f540b5a8bd6f55ec6c8f731] <==
	I1003 19:40:37.955845       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1003 19:40:37.970702       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1003 19:40:37.970834       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1003 19:40:37.973222       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:40:41.428441       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:40:45.688913       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:40:49.286982       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:40:52.340357       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:40:55.362911       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:40:55.370277       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1003 19:40:55.370438       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1003 19:40:55.370597       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-327416_e05b3a86-f980-4eb5-948f-9c7316119d8a!
	I1003 19:40:55.371213       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"879423f6-b7ff-450e-9e7a-f9f8ef1edeae", APIVersion:"v1", ResourceVersion:"685", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-327416_e05b3a86-f980-4eb5-948f-9c7316119d8a became leader
	W1003 19:40:55.375712       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:40:55.379609       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1003 19:40:55.471687       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-327416_e05b3a86-f980-4eb5-948f-9c7316119d8a!
	
	
	==> storage-provisioner [a099b0263e1ca1acdf33e1af73c68951785e54c0ba213fdfbcb1bb8d81e98644] <==
	I1003 19:40:07.689476       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1003 19:40:37.695190       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-327416 -n embed-certs-327416
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-327416 -n embed-certs-327416: exit status 2 (354.109346ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-327416 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-327416
helpers_test.go:243: (dbg) docker inspect embed-certs-327416:

-- stdout --
	[
	    {
	        "Id": "7044b9fbdfefb3fd8bce7381adae2abdcd93d79fb8452cc72e2f26e58ccd8222",
	        "Created": "2025-10-03T19:37:58.41651583Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 486220,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-03T19:39:48.806953605Z",
	            "FinishedAt": "2025-10-03T19:39:47.763628031Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/7044b9fbdfefb3fd8bce7381adae2abdcd93d79fb8452cc72e2f26e58ccd8222/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7044b9fbdfefb3fd8bce7381adae2abdcd93d79fb8452cc72e2f26e58ccd8222/hostname",
	        "HostsPath": "/var/lib/docker/containers/7044b9fbdfefb3fd8bce7381adae2abdcd93d79fb8452cc72e2f26e58ccd8222/hosts",
	        "LogPath": "/var/lib/docker/containers/7044b9fbdfefb3fd8bce7381adae2abdcd93d79fb8452cc72e2f26e58ccd8222/7044b9fbdfefb3fd8bce7381adae2abdcd93d79fb8452cc72e2f26e58ccd8222-json.log",
	        "Name": "/embed-certs-327416",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-327416:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-327416",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7044b9fbdfefb3fd8bce7381adae2abdcd93d79fb8452cc72e2f26e58ccd8222",
	                "LowerDir": "/var/lib/docker/overlay2/6d78601b2f0a3bddd2f05c4f4ab25e1cdd9b0b6f0850c52b546e1909596049d0-init/diff:/var/lib/docker/overlay2/87b205803817b0b71a214d995ab7e10a92033bbf72d76d6e052f1d21ccecb313/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6d78601b2f0a3bddd2f05c4f4ab25e1cdd9b0b6f0850c52b546e1909596049d0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6d78601b2f0a3bddd2f05c4f4ab25e1cdd9b0b6f0850c52b546e1909596049d0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6d78601b2f0a3bddd2f05c4f4ab25e1cdd9b0b6f0850c52b546e1909596049d0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-327416",
	                "Source": "/var/lib/docker/volumes/embed-certs-327416/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-327416",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-327416",
	                "name.minikube.sigs.k8s.io": "embed-certs-327416",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5165a1bbcf39ee90384661feb28d1fdb04ed8d0177377d647e91922cec0c0d98",
	            "SandboxKey": "/var/run/docker/netns/5165a1bbcf39",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33448"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33449"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33452"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33450"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33451"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-327416": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "96:a3:14:c5:c5:dc",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "438dfcb24f609c25637ab5cf83b5d0d8692bb34419c32369c46f82797d6523d1",
	                    "EndpointID": "cd81684c49c278d681da1feb21c437776a458070e42b2ac96a692e3d08c6914c",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-327416",
	                        "7044b9fbdfef"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
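Note on the inspect output above: HostConfig.PortBindings leaves every HostPort empty, and the ephemeral host ports actually assigned (33448-33452 on this run) only show up under NetworkSettings.Ports once the container is running. A minimal sketch of the lookups the harness performs later in this log (the same Go templates appear in the cli_runner lines below; container name and values are taken from this run):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' embed-certs-327416
	# prints 33448, the host port forwarded to the node's SSH port 22
	docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' embed-certs-327416
	# prints 192.168.85.2, the container's address on the embed-certs-327416 network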
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-327416 -n embed-certs-327416
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-327416 -n embed-certs-327416: exit status 2 (459.532329ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-327416 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-327416 logs -n 25: (1.286539703s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ delete  │ -p cert-expiration-324520                                                                                                                                                                                                                     │ cert-expiration-324520       │ jenkins │ v1.37.0 │ 03 Oct 25 19:36 UTC │ 03 Oct 25 19:36 UTC │
	│ start   │ -p no-preload-643397 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-643397            │ jenkins │ v1.37.0 │ 03 Oct 25 19:36 UTC │ 03 Oct 25 19:37 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-174543 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-174543       │ jenkins │ v1.37.0 │ 03 Oct 25 19:36 UTC │ 03 Oct 25 19:36 UTC │
	│ start   │ -p old-k8s-version-174543 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-174543       │ jenkins │ v1.37.0 │ 03 Oct 25 19:36 UTC │ 03 Oct 25 19:37 UTC │
	│ image   │ old-k8s-version-174543 image list --format=json                                                                                                                                                                                               │ old-k8s-version-174543       │ jenkins │ v1.37.0 │ 03 Oct 25 19:37 UTC │ 03 Oct 25 19:37 UTC │
	│ pause   │ -p old-k8s-version-174543 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-174543       │ jenkins │ v1.37.0 │ 03 Oct 25 19:37 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-643397 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-643397            │ jenkins │ v1.37.0 │ 03 Oct 25 19:37 UTC │                     │
	│ stop    │ -p no-preload-643397 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-643397            │ jenkins │ v1.37.0 │ 03 Oct 25 19:37 UTC │ 03 Oct 25 19:38 UTC │
	│ delete  │ -p old-k8s-version-174543                                                                                                                                                                                                                     │ old-k8s-version-174543       │ jenkins │ v1.37.0 │ 03 Oct 25 19:37 UTC │ 03 Oct 25 19:37 UTC │
	│ delete  │ -p old-k8s-version-174543                                                                                                                                                                                                                     │ old-k8s-version-174543       │ jenkins │ v1.37.0 │ 03 Oct 25 19:37 UTC │ 03 Oct 25 19:37 UTC │
	│ start   │ -p embed-certs-327416 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-327416           │ jenkins │ v1.37.0 │ 03 Oct 25 19:37 UTC │ 03 Oct 25 19:39 UTC │
	│ addons  │ enable dashboard -p no-preload-643397 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-643397            │ jenkins │ v1.37.0 │ 03 Oct 25 19:38 UTC │ 03 Oct 25 19:38 UTC │
	│ start   │ -p no-preload-643397 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-643397            │ jenkins │ v1.37.0 │ 03 Oct 25 19:38 UTC │ 03 Oct 25 19:39 UTC │
	│ image   │ no-preload-643397 image list --format=json                                                                                                                                                                                                    │ no-preload-643397            │ jenkins │ v1.37.0 │ 03 Oct 25 19:39 UTC │ 03 Oct 25 19:39 UTC │
	│ pause   │ -p no-preload-643397 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-643397            │ jenkins │ v1.37.0 │ 03 Oct 25 19:39 UTC │                     │
	│ delete  │ -p no-preload-643397                                                                                                                                                                                                                          │ no-preload-643397            │ jenkins │ v1.37.0 │ 03 Oct 25 19:39 UTC │ 03 Oct 25 19:39 UTC │
	│ delete  │ -p no-preload-643397                                                                                                                                                                                                                          │ no-preload-643397            │ jenkins │ v1.37.0 │ 03 Oct 25 19:39 UTC │ 03 Oct 25 19:39 UTC │
	│ delete  │ -p disable-driver-mounts-839513                                                                                                                                                                                                               │ disable-driver-mounts-839513 │ jenkins │ v1.37.0 │ 03 Oct 25 19:39 UTC │ 03 Oct 25 19:39 UTC │
	│ start   │ -p default-k8s-diff-port-842797 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-842797 │ jenkins │ v1.37.0 │ 03 Oct 25 19:39 UTC │ 03 Oct 25 19:40 UTC │
	│ addons  │ enable metrics-server -p embed-certs-327416 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-327416           │ jenkins │ v1.37.0 │ 03 Oct 25 19:39 UTC │                     │
	│ stop    │ -p embed-certs-327416 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-327416           │ jenkins │ v1.37.0 │ 03 Oct 25 19:39 UTC │ 03 Oct 25 19:39 UTC │
	│ addons  │ enable dashboard -p embed-certs-327416 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-327416           │ jenkins │ v1.37.0 │ 03 Oct 25 19:39 UTC │ 03 Oct 25 19:39 UTC │
	│ start   │ -p embed-certs-327416 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-327416           │ jenkins │ v1.37.0 │ 03 Oct 25 19:39 UTC │ 03 Oct 25 19:40 UTC │
	│ image   │ embed-certs-327416 image list --format=json                                                                                                                                                                                                   │ embed-certs-327416           │ jenkins │ v1.37.0 │ 03 Oct 25 19:40 UTC │ 03 Oct 25 19:40 UTC │
	│ pause   │ -p embed-certs-327416 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-327416           │ jenkins │ v1.37.0 │ 03 Oct 25 19:40 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/03 19:39:48
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 19:39:48.406056  486090 out.go:360] Setting OutFile to fd 1 ...
	I1003 19:39:48.406272  486090 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 19:39:48.406299  486090 out.go:374] Setting ErrFile to fd 2...
	I1003 19:39:48.406317  486090 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 19:39:48.406630  486090 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-284583/.minikube/bin
	I1003 19:39:48.408003  486090 out.go:368] Setting JSON to false
	I1003 19:39:48.409020  486090 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8540,"bootTime":1759511849,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1003 19:39:48.409111  486090 start.go:140] virtualization:  
	I1003 19:39:48.414161  486090 out.go:179] * [embed-certs-327416] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1003 19:39:48.417360  486090 notify.go:220] Checking for updates...
	I1003 19:39:48.420766  486090 out.go:179]   - MINIKUBE_LOCATION=21625
	I1003 19:39:48.423562  486090 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 19:39:48.426471  486090 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21625-284583/kubeconfig
	I1003 19:39:48.429299  486090 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-284583/.minikube
	I1003 19:39:48.432144  486090 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1003 19:39:48.434908  486090 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 19:39:48.438199  486090 config.go:182] Loaded profile config "embed-certs-327416": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 19:39:48.438763  486090 driver.go:421] Setting default libvirt URI to qemu:///system
	I1003 19:39:48.468865  486090 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1003 19:39:48.468982  486090 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 19:39:48.574762  486090 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:43 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-03 19:39:48.564624002 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1003 19:39:48.574873  486090 docker.go:318] overlay module found
	I1003 19:39:48.577891  486090 out.go:179] * Using the docker driver based on existing profile
	I1003 19:39:48.580702  486090 start.go:304] selected driver: docker
	I1003 19:39:48.580794  486090 start.go:924] validating driver "docker" against &{Name:embed-certs-327416 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-327416 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 19:39:48.580912  486090 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 19:39:48.581602  486090 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 19:39:48.690084  486090 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:43 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-03 19:39:48.680014635 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1003 19:39:48.690427  486090 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 19:39:48.690462  486090 cni.go:84] Creating CNI manager for ""
	I1003 19:39:48.690528  486090 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 19:39:48.690573  486090 start.go:348] cluster config:
	{Name:embed-certs-327416 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-327416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 19:39:48.695599  486090 out.go:179] * Starting "embed-certs-327416" primary control-plane node in "embed-certs-327416" cluster
	I1003 19:39:48.698376  486090 cache.go:123] Beginning downloading kic base image for docker with crio
	I1003 19:39:48.701351  486090 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1003 19:39:48.704158  486090 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 19:39:48.704218  486090 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21625-284583/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1003 19:39:48.704233  486090 cache.go:58] Caching tarball of preloaded images
	I1003 19:39:48.704331  486090 preload.go:233] Found /home/jenkins/minikube-integration/21625-284583/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1003 19:39:48.704347  486090 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1003 19:39:48.704461  486090 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/embed-certs-327416/config.json ...
	I1003 19:39:48.704686  486090 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1003 19:39:48.732663  486090 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1003 19:39:48.732691  486090 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1003 19:39:48.732705  486090 cache.go:232] Successfully downloaded all kic artifacts
	I1003 19:39:48.732781  486090 start.go:360] acquireMachinesLock for embed-certs-327416: {Name:mk5dc758d01b8c5f84eccb23a8f0d09c618d844f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 19:39:48.732861  486090 start.go:364] duration metric: took 46.983µs to acquireMachinesLock for "embed-certs-327416"
	I1003 19:39:48.732887  486090 start.go:96] Skipping create...Using existing machine configuration
	I1003 19:39:48.732898  486090 fix.go:54] fixHost starting: 
	I1003 19:39:48.733155  486090 cli_runner.go:164] Run: docker container inspect embed-certs-327416 --format={{.State.Status}}
	I1003 19:39:48.759430  486090 fix.go:112] recreateIfNeeded on embed-certs-327416: state=Stopped err=<nil>
	W1003 19:39:48.759482  486090 fix.go:138] unexpected machine state, will restart: <nil>
	I1003 19:39:46.683083  483467 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1003 19:39:47.215411  483467 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1003 19:39:47.871927  483467 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1003 19:39:47.872204  483467 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1003 19:39:48.377112  483467 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1003 19:39:48.713076  483467 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1003 19:39:49.977727  483467 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1003 19:39:51.424340  483467 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1003 19:39:51.621915  483467 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1003 19:39:51.622717  483467 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1003 19:39:51.625492  483467 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1003 19:39:48.762885  486090 out.go:252] * Restarting existing docker container for "embed-certs-327416" ...
	I1003 19:39:48.762994  486090 cli_runner.go:164] Run: docker start embed-certs-327416
	I1003 19:39:49.052325  486090 cli_runner.go:164] Run: docker container inspect embed-certs-327416 --format={{.State.Status}}
	I1003 19:39:49.084435  486090 kic.go:430] container "embed-certs-327416" state is running.
	I1003 19:39:49.084868  486090 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-327416
	I1003 19:39:49.114499  486090 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/embed-certs-327416/config.json ...
	I1003 19:39:49.114727  486090 machine.go:93] provisionDockerMachine start ...
	I1003 19:39:49.114787  486090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-327416
	I1003 19:39:49.145479  486090 main.go:141] libmachine: Using SSH client type: native
	I1003 19:39:49.145801  486090 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1003 19:39:49.145817  486090 main.go:141] libmachine: About to run SSH command:
	hostname
	I1003 19:39:49.148254  486090 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1003 19:39:52.301067  486090 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-327416
	
	I1003 19:39:52.301112  486090 ubuntu.go:182] provisioning hostname "embed-certs-327416"
	I1003 19:39:52.301199  486090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-327416
	I1003 19:39:52.324448  486090 main.go:141] libmachine: Using SSH client type: native
	I1003 19:39:52.324837  486090 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1003 19:39:52.324852  486090 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-327416 && echo "embed-certs-327416" | sudo tee /etc/hostname
	I1003 19:39:52.487798  486090 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-327416
	
	I1003 19:39:52.487940  486090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-327416
	I1003 19:39:52.514479  486090 main.go:141] libmachine: Using SSH client type: native
	I1003 19:39:52.514791  486090 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1003 19:39:52.514808  486090 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-327416' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-327416/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-327416' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 19:39:52.653185  486090 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 19:39:52.653276  486090 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21625-284583/.minikube CaCertPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21625-284583/.minikube}
	I1003 19:39:52.653328  486090 ubuntu.go:190] setting up certificates
	I1003 19:39:52.653364  486090 provision.go:84] configureAuth start
	I1003 19:39:52.653456  486090 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-327416
	I1003 19:39:52.678375  486090 provision.go:143] copyHostCerts
	I1003 19:39:52.678440  486090 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-284583/.minikube/key.pem, removing ...
	I1003 19:39:52.678458  486090 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-284583/.minikube/key.pem
	I1003 19:39:52.678535  486090 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21625-284583/.minikube/key.pem (1675 bytes)
	I1003 19:39:52.678640  486090 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-284583/.minikube/ca.pem, removing ...
	I1003 19:39:52.678645  486090 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-284583/.minikube/ca.pem
	I1003 19:39:52.678677  486090 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21625-284583/.minikube/ca.pem (1082 bytes)
	I1003 19:39:52.678741  486090 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-284583/.minikube/cert.pem, removing ...
	I1003 19:39:52.678746  486090 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-284583/.minikube/cert.pem
	I1003 19:39:52.678772  486090 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21625-284583/.minikube/cert.pem (1123 bytes)
	I1003 19:39:52.678828  486090 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca-key.pem org=jenkins.embed-certs-327416 san=[127.0.0.1 192.168.85.2 embed-certs-327416 localhost minikube]
	I1003 19:39:51.629007  483467 out.go:252]   - Booting up control plane ...
	I1003 19:39:51.629116  483467 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1003 19:39:51.629198  483467 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1003 19:39:51.629269  483467 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1003 19:39:51.644565  483467 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1003 19:39:51.644972  483467 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1003 19:39:51.653263  483467 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1003 19:39:51.653582  483467 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1003 19:39:51.653644  483467 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1003 19:39:51.783065  483467 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1003 19:39:51.783191  483467 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1003 19:39:53.787581  483467 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 2.001752936s
	I1003 19:39:53.787705  483467 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1003 19:39:53.787791  483467 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8444/livez
	I1003 19:39:53.787885  483467 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1003 19:39:53.787968  483467 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1003 19:39:53.585518  486090 provision.go:177] copyRemoteCerts
	I1003 19:39:53.585643  486090 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 19:39:53.585729  486090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-327416
	I1003 19:39:53.604907  486090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/embed-certs-327416/id_rsa Username:docker}
	I1003 19:39:53.704464  486090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 19:39:53.726033  486090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1003 19:39:53.744396  486090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1003 19:39:53.762518  486090 provision.go:87] duration metric: took 1.109120098s to configureAuth
	I1003 19:39:53.762544  486090 ubuntu.go:206] setting minikube options for container-runtime
	I1003 19:39:53.762724  486090 config.go:182] Loaded profile config "embed-certs-327416": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 19:39:53.762831  486090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-327416
	I1003 19:39:53.780425  486090 main.go:141] libmachine: Using SSH client type: native
	I1003 19:39:53.780782  486090 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1003 19:39:53.780806  486090 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1003 19:39:54.178466  486090 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1003 19:39:54.178541  486090 machine.go:96] duration metric: took 5.063804675s to provisionDockerMachine
	I1003 19:39:54.178585  486090 start.go:293] postStartSetup for "embed-certs-327416" (driver="docker")
	I1003 19:39:54.178623  486090 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 19:39:54.178728  486090 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 19:39:54.178799  486090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-327416
	I1003 19:39:54.202940  486090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/embed-certs-327416/id_rsa Username:docker}
	I1003 19:39:54.322166  486090 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 19:39:54.325643  486090 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1003 19:39:54.325718  486090 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1003 19:39:54.325743  486090 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-284583/.minikube/addons for local assets ...
	I1003 19:39:54.325828  486090 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-284583/.minikube/files for local assets ...
	I1003 19:39:54.325965  486090 filesync.go:149] local asset: /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem -> 2864342.pem in /etc/ssl/certs
	I1003 19:39:54.326115  486090 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 19:39:54.338999  486090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem --> /etc/ssl/certs/2864342.pem (1708 bytes)
	I1003 19:39:54.366124  486090 start.go:296] duration metric: took 187.497578ms for postStartSetup
	I1003 19:39:54.366211  486090 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 19:39:54.366257  486090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-327416
	I1003 19:39:54.403022  486090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/embed-certs-327416/id_rsa Username:docker}
	I1003 19:39:54.506155  486090 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1003 19:39:54.514166  486090 fix.go:56] duration metric: took 5.781259919s for fixHost
	I1003 19:39:54.514192  486090 start.go:83] releasing machines lock for "embed-certs-327416", held for 5.781315928s
	I1003 19:39:54.514274  486090 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-327416
	I1003 19:39:54.546329  486090 ssh_runner.go:195] Run: cat /version.json
	I1003 19:39:54.546384  486090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-327416
	I1003 19:39:54.546646  486090 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 19:39:54.546702  486090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-327416
	I1003 19:39:54.575481  486090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/embed-certs-327416/id_rsa Username:docker}
	I1003 19:39:54.596971  486090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/embed-certs-327416/id_rsa Username:docker}
	I1003 19:39:54.688394  486090 ssh_runner.go:195] Run: systemctl --version
	I1003 19:39:54.806335  486090 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1003 19:39:54.885166  486090 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1003 19:39:54.889961  486090 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 19:39:54.890031  486090 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 19:39:54.900767  486090 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1003 19:39:54.900792  486090 start.go:495] detecting cgroup driver to use...
	I1003 19:39:54.900823  486090 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1003 19:39:54.900878  486090 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 19:39:54.922992  486090 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 19:39:54.945562  486090 docker.go:218] disabling cri-docker service (if available) ...
	I1003 19:39:54.945624  486090 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1003 19:39:54.965809  486090 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1003 19:39:54.984754  486090 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1003 19:39:55.213902  486090 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1003 19:39:55.431424  486090 docker.go:234] disabling docker service ...
	I1003 19:39:55.431504  486090 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1003 19:39:55.453523  486090 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1003 19:39:55.466989  486090 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1003 19:39:55.658467  486090 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1003 19:39:55.840577  486090 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1003 19:39:55.866618  486090 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 19:39:55.890570  486090 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1003 19:39:55.890719  486090 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:39:55.904611  486090 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1003 19:39:55.904773  486090 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:39:55.921983  486090 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:39:55.937691  486090 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:39:55.950924  486090 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 19:39:55.965449  486090 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:39:55.978736  486090 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:39:55.994168  486090 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:39:56.010522  486090 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 19:39:56.026080  486090 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 19:39:56.046911  486090 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 19:39:56.268510  486090 ssh_runner.go:195] Run: sudo systemctl restart crio
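	# Annotation (not log output): taken together, the sed edits above should leave
	# /etc/crio/crio.conf.d/02-crio.conf looking roughly like the sketch below. This is a
	# reconstruction from the commands logged above, not a dump of the actual file, and the
	# section headers are assumed from cri-o's default layout.
	#
	#   [crio.image]
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#
	#   [crio.runtime]
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   default_sysctls = [
	#     "net.ipv4.ip_unprivileged_port_start=0",
	#   ]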
	I1003 19:39:56.482429  486090 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1003 19:39:56.482495  486090 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1003 19:39:56.487915  486090 start.go:563] Will wait 60s for crictl version
	I1003 19:39:56.488031  486090 ssh_runner.go:195] Run: which crictl
	I1003 19:39:56.496915  486090 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1003 19:39:56.545778  486090 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1003 19:39:56.545860  486090 ssh_runner.go:195] Run: crio --version
	I1003 19:39:56.590615  486090 ssh_runner.go:195] Run: crio --version
	I1003 19:39:56.645731  486090 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1003 19:39:56.648492  486090 cli_runner.go:164] Run: docker network inspect embed-certs-327416 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 19:39:56.668953  486090 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1003 19:39:56.672973  486090 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 19:39:56.686588  486090 kubeadm.go:883] updating cluster {Name:embed-certs-327416 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-327416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1003 19:39:56.686711  486090 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 19:39:56.686761  486090 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 19:39:56.744749  486090 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 19:39:56.744773  486090 crio.go:433] Images already preloaded, skipping extraction
	I1003 19:39:56.744827  486090 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 19:39:56.793917  486090 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 19:39:56.793941  486090 cache_images.go:85] Images are preloaded, skipping loading
	I1003 19:39:56.793949  486090 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1003 19:39:56.794052  486090 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-327416 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-327416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1003 19:39:56.794141  486090 ssh_runner.go:195] Run: crio config
	I1003 19:39:56.902732  486090 cni.go:84] Creating CNI manager for ""
	I1003 19:39:56.902755  486090 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 19:39:56.902775  486090 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1003 19:39:56.902797  486090 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-327416 NodeName:embed-certs-327416 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 19:39:56.902923  486090 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-327416"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
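The YAML above is the full kubeadm configuration minikube generates for this profile (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in one file); it is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. To sanity-check such a file by hand without modifying the node, kubeadm's dry-run mode can be pointed at it; a hedged sketch using the path from this log:

	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run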
	I1003 19:39:56.903001  486090 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1003 19:39:56.923431  486090 binaries.go:44] Found k8s binaries, skipping transfer
	I1003 19:39:56.923502  486090 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1003 19:39:56.933656  486090 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1003 19:39:56.954918  486090 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 19:39:56.990889  486090 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1003 19:39:57.007988  486090 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1003 19:39:57.012607  486090 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 19:39:57.031100  486090 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 19:39:57.219401  486090 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 19:39:57.236340  486090 certs.go:69] Setting up /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/embed-certs-327416 for IP: 192.168.85.2
	I1003 19:39:57.236370  486090 certs.go:195] generating shared ca certs ...
	I1003 19:39:57.236386  486090 certs.go:227] acquiring lock for ca certs: {Name:mk5a10e6c921326e9c211447576eaeb893259ba7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:39:57.236537  486090 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21625-284583/.minikube/ca.key
	I1003 19:39:57.236585  486090 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.key
	I1003 19:39:57.236605  486090 certs.go:257] generating profile certs ...
	I1003 19:39:57.236708  486090 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/embed-certs-327416/client.key
	I1003 19:39:57.236794  486090 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/embed-certs-327416/apiserver.key.00090923
	I1003 19:39:57.236851  486090 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/embed-certs-327416/proxy-client.key
	I1003 19:39:57.236993  486090 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/286434.pem (1338 bytes)
	W1003 19:39:57.237029  486090 certs.go:480] ignoring /home/jenkins/minikube-integration/21625-284583/.minikube/certs/286434_empty.pem, impossibly tiny 0 bytes
	I1003 19:39:57.237049  486090 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 19:39:57.237080  486090 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem (1082 bytes)
	I1003 19:39:57.237128  486090 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem (1123 bytes)
	I1003 19:39:57.237159  486090 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/key.pem (1675 bytes)
	I1003 19:39:57.237214  486090 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem (1708 bytes)
	I1003 19:39:57.237861  486090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 19:39:57.277658  486090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1003 19:39:57.334048  486090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 19:39:57.391162  486090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1003 19:39:57.431674  486090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/embed-certs-327416/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1003 19:39:57.465806  486090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/embed-certs-327416/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1003 19:39:57.485328  486090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/embed-certs-327416/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 19:39:57.527173  486090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/embed-certs-327416/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1003 19:39:57.586733  486090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 19:39:57.653883  486090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/certs/286434.pem --> /usr/share/ca-certificates/286434.pem (1338 bytes)
	I1003 19:39:57.684024  486090 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem --> /usr/share/ca-certificates/2864342.pem (1708 bytes)
	I1003 19:39:57.713347  486090 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1003 19:39:57.738914  486090 ssh_runner.go:195] Run: openssl version
	I1003 19:39:57.749213  486090 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 19:39:57.761879  486090 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 19:39:57.765847  486090 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  3 18:27 /usr/share/ca-certificates/minikubeCA.pem
	I1003 19:39:57.765929  486090 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 19:39:57.808221  486090 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1003 19:39:57.818722  486090 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/286434.pem && ln -fs /usr/share/ca-certificates/286434.pem /etc/ssl/certs/286434.pem"
	I1003 19:39:57.829386  486090 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/286434.pem
	I1003 19:39:57.833463  486090 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  3 18:34 /usr/share/ca-certificates/286434.pem
	I1003 19:39:57.833537  486090 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/286434.pem
	I1003 19:39:57.879415  486090 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/286434.pem /etc/ssl/certs/51391683.0"
	I1003 19:39:57.893919  486090 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2864342.pem && ln -fs /usr/share/ca-certificates/2864342.pem /etc/ssl/certs/2864342.pem"
	I1003 19:39:57.902841  486090 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2864342.pem
	I1003 19:39:57.911381  486090 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  3 18:34 /usr/share/ca-certificates/2864342.pem
	I1003 19:39:57.911479  486090 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2864342.pem
	I1003 19:39:57.970764  486090 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2864342.pem /etc/ssl/certs/3ec20f2e.0"
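The hash-named links created here (b5213941.0, and 51391683.0 and 3ec20f2e.0 for the other certificates) follow OpenSSL's c_rehash convention: the file name is the certificate's subject hash plus a .0 suffix, which is exactly what `openssl x509 -hash` prints. A minimal sketch of recreating one such link by hand with the same paths:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"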
	I1003 19:39:57.983014  486090 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 19:39:57.987442  486090 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1003 19:39:58.074463  486090 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1003 19:39:58.165704  486090 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1003 19:39:58.226081  486090 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1003 19:39:58.401526  486090 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1003 19:39:58.581476  486090 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
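Each `-checkend 86400` run above asks OpenSSL whether the certificate will still be valid 86400 seconds (24 hours) from now; a zero exit status means it will, which is how minikube decides the existing control-plane certificates can be reused. A tiny sketch of the same check against one of the certs copied earlier:

	if openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
	  echo "certificate valid for at least another 24h"
	else
	  echo "certificate expires within 24h (or is already invalid)"
	fi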
	I1003 19:39:58.733586  486090 kubeadm.go:400] StartCluster: {Name:embed-certs-327416 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-327416 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 19:39:58.733686  486090 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1003 19:39:58.733771  486090 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1003 19:39:58.833897  486090 cri.go:89] found id: "7251d8be4bbe1feadb8d7586aad5c359dbd66fd31d01b439cbe4b247e9edacb9"
	I1003 19:39:58.833932  486090 cri.go:89] found id: "d175d98dcd2f4aad68e57c312506a537fcec4add7ab32b2ffa4c3126efd41601"
	I1003 19:39:58.833941  486090 cri.go:89] found id: "58e88d8c2849a5437eb7767eb255d61ad53372f61e98f7b15fba814d13e38b12"
	I1003 19:39:58.833945  486090 cri.go:89] found id: "0c6c5a56f754c48cee635b6a3f179cd14335b49d4105c542ea8de2a52f7a1289"
	I1003 19:39:58.833948  486090 cri.go:89] found id: ""
	I1003 19:39:58.834021  486090 ssh_runner.go:195] Run: sudo runc list -f json
	W1003 19:39:58.862452  486090 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-03T19:39:58Z" level=error msg="open /run/runc: no such file or directory"
	I1003 19:39:58.862563  486090 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 19:39:58.881833  486090 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1003 19:39:58.881877  486090 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1003 19:39:58.881934  486090 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1003 19:39:58.906826  486090 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1003 19:39:58.907284  486090 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-327416" does not appear in /home/jenkins/minikube-integration/21625-284583/kubeconfig
	I1003 19:39:58.907438  486090 kubeconfig.go:62] /home/jenkins/minikube-integration/21625-284583/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-327416" cluster setting kubeconfig missing "embed-certs-327416" context setting]
	I1003 19:39:58.907756  486090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/kubeconfig: {Name:mkc1323fd87f4a78231a26d2dab0dff7feecf1e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:39:58.909428  486090 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1003 19:39:58.935594  486090 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1003 19:39:58.935643  486090 kubeadm.go:601] duration metric: took 53.7523ms to restartPrimaryControlPlane
	I1003 19:39:58.935653  486090 kubeadm.go:402] duration metric: took 202.077841ms to StartCluster
	I1003 19:39:58.935668  486090 settings.go:142] acquiring lock: {Name:mkc95577dbc448e3409dfa2b5e53a3a1327cb451 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:39:58.935742  486090 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21625-284583/kubeconfig
	I1003 19:39:58.936816  486090 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/kubeconfig: {Name:mkc1323fd87f4a78231a26d2dab0dff7feecf1e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:39:58.937055  486090 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1003 19:39:58.937428  486090 config.go:182] Loaded profile config "embed-certs-327416": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 19:39:58.937414  486090 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1003 19:39:58.937540  486090 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-327416"
	I1003 19:39:58.937555  486090 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-327416"
	W1003 19:39:58.937563  486090 addons.go:247] addon storage-provisioner should already be in state true
	I1003 19:39:58.937587  486090 host.go:66] Checking if "embed-certs-327416" exists ...
	I1003 19:39:58.938039  486090 cli_runner.go:164] Run: docker container inspect embed-certs-327416 --format={{.State.Status}}
	I1003 19:39:58.938228  486090 addons.go:69] Setting dashboard=true in profile "embed-certs-327416"
	I1003 19:39:58.938271  486090 addons.go:238] Setting addon dashboard=true in "embed-certs-327416"
	W1003 19:39:58.938295  486090 addons.go:247] addon dashboard should already be in state true
	I1003 19:39:58.938333  486090 host.go:66] Checking if "embed-certs-327416" exists ...
	I1003 19:39:58.938551  486090 addons.go:69] Setting default-storageclass=true in profile "embed-certs-327416"
	I1003 19:39:58.938575  486090 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-327416"
	I1003 19:39:58.938822  486090 cli_runner.go:164] Run: docker container inspect embed-certs-327416 --format={{.State.Status}}
	I1003 19:39:58.938874  486090 cli_runner.go:164] Run: docker container inspect embed-certs-327416 --format={{.State.Status}}
	I1003 19:39:58.941853  486090 out.go:179] * Verifying Kubernetes components...
	I1003 19:39:58.950884  486090 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 19:39:58.986927  486090 addons.go:238] Setting addon default-storageclass=true in "embed-certs-327416"
	W1003 19:39:58.986955  486090 addons.go:247] addon default-storageclass should already be in state true
	I1003 19:39:58.986979  486090 host.go:66] Checking if "embed-certs-327416" exists ...
	I1003 19:39:58.987392  486090 cli_runner.go:164] Run: docker container inspect embed-certs-327416 --format={{.State.Status}}
	I1003 19:39:59.004130  486090 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 19:39:59.004245  486090 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1003 19:39:59.008098  486090 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1003 19:39:59.008230  486090 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 19:39:59.008244  486090 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1003 19:39:59.008323  486090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-327416
	I1003 19:39:59.012800  486090 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1003 19:39:59.012836  486090 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1003 19:39:59.012908  486090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-327416
	I1003 19:39:59.033523  486090 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1003 19:39:59.033550  486090 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1003 19:39:59.033617  486090 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-327416
	I1003 19:39:59.064924  486090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/embed-certs-327416/id_rsa Username:docker}
	I1003 19:39:59.072820  486090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/embed-certs-327416/id_rsa Username:docker}
	I1003 19:39:59.084949  486090 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/embed-certs-327416/id_rsa Username:docker}
	I1003 19:39:59.470877  486090 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 19:39:59.525123  486090 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1003 19:39:59.525145  486090 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1003 19:39:59.553366  486090 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1003 19:39:59.584068  486090 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 19:39:59.605388  486090 node_ready.go:35] waiting up to 6m0s for node "embed-certs-327416" to be "Ready" ...
	I1003 19:39:59.636677  486090 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1003 19:39:59.636707  486090 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1003 19:39:59.773937  486090 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1003 19:39:59.773962  486090 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1003 19:39:59.875189  486090 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1003 19:39:59.875215  486090 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1003 19:40:00.018543  486090 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1003 19:40:00.018572  486090 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1003 19:40:00.149227  486090 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1003 19:40:00.149256  486090 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1003 19:40:00.224486  486090 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1003 19:40:00.224517  486090 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1003 19:40:00.294180  486090 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1003 19:40:00.294208  486090 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1003 19:40:00.352071  486090 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1003 19:40:00.352100  486090 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1003 19:40:00.386881  486090 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1003 19:40:01.159781  483467 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 7.371754693s
	I1003 19:40:02.716838  483467 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 8.929305815s
	I1003 19:40:03.790589  483467 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 10.002833133s
	I1003 19:40:03.813779  483467 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1003 19:40:03.837694  483467 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1003 19:40:03.851798  483467 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1003 19:40:03.852007  483467 kubeadm.go:318] [mark-control-plane] Marking the node default-k8s-diff-port-842797 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1003 19:40:03.869722  483467 kubeadm.go:318] [bootstrap-token] Using token: t3ldah.09tb2yxkfmma6h8c
	I1003 19:40:03.872779  483467 out.go:252]   - Configuring RBAC rules ...
	I1003 19:40:03.872900  483467 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1003 19:40:03.884948  483467 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1003 19:40:03.901207  483467 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1003 19:40:03.908547  483467 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1003 19:40:03.913073  483467 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1003 19:40:03.917154  483467 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1003 19:40:04.197193  483467 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1003 19:40:04.757783  483467 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1003 19:40:05.209674  483467 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1003 19:40:05.211324  483467 kubeadm.go:318] 
	I1003 19:40:05.211413  483467 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1003 19:40:05.211419  483467 kubeadm.go:318] 
	I1003 19:40:05.211500  483467 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1003 19:40:05.211505  483467 kubeadm.go:318] 
	I1003 19:40:05.211531  483467 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1003 19:40:05.212032  483467 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1003 19:40:05.212101  483467 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1003 19:40:05.212107  483467 kubeadm.go:318] 
	I1003 19:40:05.212168  483467 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1003 19:40:05.212173  483467 kubeadm.go:318] 
	I1003 19:40:05.212222  483467 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1003 19:40:05.212226  483467 kubeadm.go:318] 
	I1003 19:40:05.212281  483467 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1003 19:40:05.212359  483467 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1003 19:40:05.212430  483467 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1003 19:40:05.212434  483467 kubeadm.go:318] 
	I1003 19:40:05.212810  483467 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1003 19:40:05.212971  483467 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1003 19:40:05.213004  483467 kubeadm.go:318] 
	I1003 19:40:05.213316  483467 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8444 --token t3ldah.09tb2yxkfmma6h8c \
	I1003 19:40:05.213430  483467 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:f66ff31263aa4cda6b17caa2076838d6a1918275f1c2773b90b119c0d4a4d71a \
	I1003 19:40:05.213658  483467 kubeadm.go:318] 	--control-plane 
	I1003 19:40:05.213669  483467 kubeadm.go:318] 
	I1003 19:40:05.213984  483467 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1003 19:40:05.213994  483467 kubeadm.go:318] 
	I1003 19:40:05.214295  483467 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8444 --token t3ldah.09tb2yxkfmma6h8c \
	I1003 19:40:05.214607  483467 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:f66ff31263aa4cda6b17caa2076838d6a1918275f1c2773b90b119c0d4a4d71a 
	I1003 19:40:05.224654  483467 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1003 19:40:05.224895  483467 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1003 19:40:05.224999  483467 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
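The --discovery-token-ca-cert-hash shown in the join commands above is the SHA-256 digest of the cluster CA's public key. It can be recomputed on the control-plane node with the stock OpenSSL pipeline from the kubeadm documentation; the path below assumes minikube's certificatesDir (/var/lib/minikube/certs) from the kubeadm config earlier in this log:

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'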
	I1003 19:40:05.225016  483467 cni.go:84] Creating CNI manager for ""
	I1003 19:40:05.225027  483467 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 19:40:05.228397  483467 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1003 19:40:05.231268  483467 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1003 19:40:05.241725  483467 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1003 19:40:05.241744  483467 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1003 19:40:05.285091  483467 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1003 19:40:05.899280  483467 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1003 19:40:05.899430  483467 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 19:40:05.899506  483467 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-842797 minikube.k8s.io/updated_at=2025_10_03T19_40_05_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=a43873c79fc22f8b1ccd29d3dfa635d392b09335 minikube.k8s.io/name=default-k8s-diff-port-842797 minikube.k8s.io/primary=true
	I1003 19:40:05.886238  486090 node_ready.go:49] node "embed-certs-327416" is "Ready"
	I1003 19:40:05.886265  486090 node_ready.go:38] duration metric: took 6.280845633s for node "embed-certs-327416" to be "Ready" ...
	I1003 19:40:05.886279  486090 api_server.go:52] waiting for apiserver process to appear ...
	I1003 19:40:05.886356  486090 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 19:40:06.569129  486090 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.015719388s)
	I1003 19:40:08.376120  486090 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.791956347s)
	I1003 19:40:08.376388  486090 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.490019166s)
	I1003 19:40:08.376405  486090 api_server.go:72] duration metric: took 9.439311864s to wait for apiserver process to appear ...
	I1003 19:40:08.376412  486090 api_server.go:88] waiting for apiserver healthz status ...
	I1003 19:40:08.376429  486090 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1003 19:40:08.376351  486090 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.98943414s)
	I1003 19:40:08.380053  486090 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-327416 addons enable metrics-server
	
	I1003 19:40:08.383081  486090 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1003 19:40:08.386051  486090 addons.go:514] duration metric: took 9.448611445s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1003 19:40:08.388956  486090 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
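The healthz probe is a plain HTTPS GET against the apiserver; /healthz is readable anonymously under the default RBAC rules, so it can be reproduced from the host with curl (here skipping TLS verification, since the profile's CA bundle lives under the minikube home directory):

	curl -k https://192.168.85.2:8443/healthz
	# prints "ok" when the apiserver is healthy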
	I1003 19:40:08.391422  486090 api_server.go:141] control plane version: v1.34.1
	I1003 19:40:08.391445  486090 api_server.go:131] duration metric: took 15.027324ms to wait for apiserver health ...
	I1003 19:40:08.391454  486090 system_pods.go:43] waiting for kube-system pods to appear ...
	I1003 19:40:08.395808  486090 system_pods.go:59] 8 kube-system pods found
	I1003 19:40:08.395893  486090 system_pods.go:61] "coredns-66bc5c9577-bjdpd" [17c509e4-9d58-4e2e-9a05-3e6eb361dc8a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1003 19:40:08.395917  486090 system_pods.go:61] "etcd-embed-certs-327416" [292d87c6-b170-473c-94eb-33bf1ec95a97] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1003 19:40:08.395955  486090 system_pods.go:61] "kindnet-2jswv" [b05191d5-b4b3-42d6-8488-25e3b30ad1a1] Running
	I1003 19:40:08.395983  486090 system_pods.go:61] "kube-apiserver-embed-certs-327416" [da030608-0739-46db-a5c1-bd540ab4a19a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1003 19:40:08.396007  486090 system_pods.go:61] "kube-controller-manager-embed-certs-327416" [5b0e00b7-6093-4c79-a1a2-2b21160b65dd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1003 19:40:08.396040  486090 system_pods.go:61] "kube-proxy-ncw55" [54ac7a9a-424b-4c7e-94a8-5a15bc1d91c2] Running
	I1003 19:40:08.396065  486090 system_pods.go:61] "kube-scheduler-embed-certs-327416" [86958be9-5e24-4927-80fd-8e2101189244] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1003 19:40:08.396084  486090 system_pods.go:61] "storage-provisioner" [b02f2aae-4045-452f-aaac-e4bf1daea610] Running
	I1003 19:40:08.396121  486090 system_pods.go:74] duration metric: took 4.659911ms to wait for pod list to return data ...
	I1003 19:40:08.396146  486090 default_sa.go:34] waiting for default service account to be created ...
	I1003 19:40:08.400509  486090 default_sa.go:45] found service account: "default"
	I1003 19:40:08.400584  486090 default_sa.go:55] duration metric: took 4.415666ms for default service account to be created ...
	I1003 19:40:08.400607  486090 system_pods.go:116] waiting for k8s-apps to be running ...
	I1003 19:40:08.404384  486090 system_pods.go:86] 8 kube-system pods found
	I1003 19:40:08.404464  486090 system_pods.go:89] "coredns-66bc5c9577-bjdpd" [17c509e4-9d58-4e2e-9a05-3e6eb361dc8a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1003 19:40:08.404488  486090 system_pods.go:89] "etcd-embed-certs-327416" [292d87c6-b170-473c-94eb-33bf1ec95a97] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1003 19:40:08.404506  486090 system_pods.go:89] "kindnet-2jswv" [b05191d5-b4b3-42d6-8488-25e3b30ad1a1] Running
	I1003 19:40:08.404543  486090 system_pods.go:89] "kube-apiserver-embed-certs-327416" [da030608-0739-46db-a5c1-bd540ab4a19a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1003 19:40:08.404569  486090 system_pods.go:89] "kube-controller-manager-embed-certs-327416" [5b0e00b7-6093-4c79-a1a2-2b21160b65dd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1003 19:40:08.404588  486090 system_pods.go:89] "kube-proxy-ncw55" [54ac7a9a-424b-4c7e-94a8-5a15bc1d91c2] Running
	I1003 19:40:08.404626  486090 system_pods.go:89] "kube-scheduler-embed-certs-327416" [86958be9-5e24-4927-80fd-8e2101189244] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1003 19:40:08.404680  486090 system_pods.go:89] "storage-provisioner" [b02f2aae-4045-452f-aaac-e4bf1daea610] Running
	I1003 19:40:08.404716  486090 system_pods.go:126] duration metric: took 4.089794ms to wait for k8s-apps to be running ...
	I1003 19:40:08.404755  486090 system_svc.go:44] waiting for kubelet service to be running ....
	I1003 19:40:08.404845  486090 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 19:40:06.365683  483467 ops.go:34] apiserver oom_adj: -16
	I1003 19:40:06.365786  483467 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 19:40:06.865955  483467 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 19:40:07.365876  483467 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 19:40:07.866780  483467 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 19:40:08.366254  483467 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 19:40:08.866211  483467 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 19:40:09.366456  483467 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 19:40:09.866524  483467 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 19:40:10.365874  483467 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 19:40:10.865924  483467 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 19:40:11.070612  483467 kubeadm.go:1113] duration metric: took 5.171241197s to wait for elevateKubeSystemPrivileges
	I1003 19:40:11.070646  483467 kubeadm.go:402] duration metric: took 30.84337732s to StartCluster
	I1003 19:40:11.070665  483467 settings.go:142] acquiring lock: {Name:mkc95577dbc448e3409dfa2b5e53a3a1327cb451 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:40:11.070736  483467 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21625-284583/kubeconfig
	I1003 19:40:11.072308  483467 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/kubeconfig: {Name:mkc1323fd87f4a78231a26d2dab0dff7feecf1e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:40:11.072574  483467 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1003 19:40:11.072846  483467 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1003 19:40:11.073142  483467 config.go:182] Loaded profile config "default-k8s-diff-port-842797": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 19:40:11.073193  483467 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1003 19:40:11.073265  483467 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-842797"
	I1003 19:40:11.073285  483467 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-842797"
	I1003 19:40:11.073314  483467 host.go:66] Checking if "default-k8s-diff-port-842797" exists ...
	I1003 19:40:11.073780  483467 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-842797 --format={{.State.Status}}
	I1003 19:40:11.074196  483467 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-842797"
	I1003 19:40:11.074219  483467 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-842797"
	I1003 19:40:11.074496  483467 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-842797 --format={{.State.Status}}
	I1003 19:40:11.076248  483467 out.go:179] * Verifying Kubernetes components...
	I1003 19:40:11.080359  483467 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 19:40:11.129153  483467 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 19:40:11.133820  483467 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-842797"
	I1003 19:40:11.133864  483467 host.go:66] Checking if "default-k8s-diff-port-842797" exists ...
	I1003 19:40:11.134307  483467 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-842797 --format={{.State.Status}}
	I1003 19:40:11.134469  483467 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 19:40:11.134487  483467 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1003 19:40:11.134526  483467 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-842797
	I1003 19:40:11.188251  483467 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1003 19:40:11.188271  483467 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1003 19:40:11.188332  483467 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-842797
	I1003 19:40:11.192700  483467 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/default-k8s-diff-port-842797/id_rsa Username:docker}
	I1003 19:40:11.220860  483467 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/default-k8s-diff-port-842797/id_rsa Username:docker}
	I1003 19:40:11.449266  483467 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
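The sed pipeline above edits the CoreDNS Corefile before replacing the ConfigMap: it inserts a `log` directive ahead of `errors` and a `hosts` block ahead of the `forward . /etc/resolv.conf` line, so host.minikube.internal resolves to the gateway from inside pods. Reconstructed from those sed expressions, the affected part of the Corefile ends up looking roughly like:

	        log
	        errors
	        ...
	        hosts {
	           192.168.76.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf
	        ...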
	I1003 19:40:11.449441  483467 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 19:40:11.560602  483467 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1003 19:40:11.572512  483467 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 19:40:12.019928  483467 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-842797" to be "Ready" ...
	I1003 19:40:12.020370  483467 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1003 19:40:12.405145  483467 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1003 19:40:08.432130  486090 system_svc.go:56] duration metric: took 27.367059ms WaitForService to wait for kubelet
	I1003 19:40:08.432211  486090 kubeadm.go:586] duration metric: took 9.495114568s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 19:40:08.432262  486090 node_conditions.go:102] verifying NodePressure condition ...
	I1003 19:40:08.436492  486090 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1003 19:40:08.436569  486090 node_conditions.go:123] node cpu capacity is 2
	I1003 19:40:08.436603  486090 node_conditions.go:105] duration metric: took 4.322651ms to run NodePressure ...
	I1003 19:40:08.436652  486090 start.go:241] waiting for startup goroutines ...
	I1003 19:40:08.436679  486090 start.go:246] waiting for cluster config update ...
	I1003 19:40:08.436708  486090 start.go:255] writing updated cluster config ...
	I1003 19:40:08.437083  486090 ssh_runner.go:195] Run: rm -f paused
	I1003 19:40:08.441513  486090 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1003 19:40:08.500160  486090 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-bjdpd" in "kube-system" namespace to be "Ready" or be gone ...
	W1003 19:40:10.557912  486090 pod_ready.go:104] pod "coredns-66bc5c9577-bjdpd" is not "Ready", error: <nil>
	W1003 19:40:13.015714  486090 pod_ready.go:104] pod "coredns-66bc5c9577-bjdpd" is not "Ready", error: <nil>
	I1003 19:40:12.408846  483467 addons.go:514] duration metric: took 1.335628448s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1003 19:40:12.526333  483467 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-842797" context rescaled to 1 replicas
	W1003 19:40:14.024181  483467 node_ready.go:57] node "default-k8s-diff-port-842797" has "Ready":"False" status (will retry)
	W1003 19:40:15.511756  486090 pod_ready.go:104] pod "coredns-66bc5c9577-bjdpd" is not "Ready", error: <nil>
	W1003 19:40:17.517009  486090 pod_ready.go:104] pod "coredns-66bc5c9577-bjdpd" is not "Ready", error: <nil>
	W1003 19:40:16.524209  483467 node_ready.go:57] node "default-k8s-diff-port-842797" has "Ready":"False" status (will retry)
	W1003 19:40:19.024225  483467 node_ready.go:57] node "default-k8s-diff-port-842797" has "Ready":"False" status (will retry)
	W1003 19:40:20.009685  486090 pod_ready.go:104] pod "coredns-66bc5c9577-bjdpd" is not "Ready", error: <nil>
	W1003 19:40:22.011900  486090 pod_ready.go:104] pod "coredns-66bc5c9577-bjdpd" is not "Ready", error: <nil>
	W1003 19:40:21.523293  483467 node_ready.go:57] node "default-k8s-diff-port-842797" has "Ready":"False" status (will retry)
	W1003 19:40:23.523534  483467 node_ready.go:57] node "default-k8s-diff-port-842797" has "Ready":"False" status (will retry)
	W1003 19:40:25.523686  483467 node_ready.go:57] node "default-k8s-diff-port-842797" has "Ready":"False" status (will retry)
	W1003 19:40:24.506338  486090 pod_ready.go:104] pod "coredns-66bc5c9577-bjdpd" is not "Ready", error: <nil>
	W1003 19:40:27.008049  486090 pod_ready.go:104] pod "coredns-66bc5c9577-bjdpd" is not "Ready", error: <nil>
	W1003 19:40:28.023461  483467 node_ready.go:57] node "default-k8s-diff-port-842797" has "Ready":"False" status (will retry)
	W1003 19:40:30.025923  483467 node_ready.go:57] node "default-k8s-diff-port-842797" has "Ready":"False" status (will retry)
	W1003 19:40:29.008193  486090 pod_ready.go:104] pod "coredns-66bc5c9577-bjdpd" is not "Ready", error: <nil>
	W1003 19:40:31.507528  486090 pod_ready.go:104] pod "coredns-66bc5c9577-bjdpd" is not "Ready", error: <nil>
	W1003 19:40:32.523028  483467 node_ready.go:57] node "default-k8s-diff-port-842797" has "Ready":"False" status (will retry)
	W1003 19:40:34.523316  483467 node_ready.go:57] node "default-k8s-diff-port-842797" has "Ready":"False" status (will retry)
	W1003 19:40:34.011446  486090 pod_ready.go:104] pod "coredns-66bc5c9577-bjdpd" is not "Ready", error: <nil>
	W1003 19:40:36.505721  486090 pod_ready.go:104] pod "coredns-66bc5c9577-bjdpd" is not "Ready", error: <nil>
	W1003 19:40:37.023775  483467 node_ready.go:57] node "default-k8s-diff-port-842797" has "Ready":"False" status (will retry)
	W1003 19:40:39.523284  483467 node_ready.go:57] node "default-k8s-diff-port-842797" has "Ready":"False" status (will retry)
	W1003 19:40:38.508590  486090 pod_ready.go:104] pod "coredns-66bc5c9577-bjdpd" is not "Ready", error: <nil>
	I1003 19:40:39.508669  486090 pod_ready.go:94] pod "coredns-66bc5c9577-bjdpd" is "Ready"
	I1003 19:40:39.508702  486090 pod_ready.go:86] duration metric: took 31.008454932s for pod "coredns-66bc5c9577-bjdpd" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:40:39.511349  486090 pod_ready.go:83] waiting for pod "etcd-embed-certs-327416" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:40:39.516072  486090 pod_ready.go:94] pod "etcd-embed-certs-327416" is "Ready"
	I1003 19:40:39.516099  486090 pod_ready.go:86] duration metric: took 4.722724ms for pod "etcd-embed-certs-327416" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:40:39.518442  486090 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-327416" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:40:39.524871  486090 pod_ready.go:94] pod "kube-apiserver-embed-certs-327416" is "Ready"
	I1003 19:40:39.524898  486090 pod_ready.go:86] duration metric: took 6.427628ms for pod "kube-apiserver-embed-certs-327416" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:40:39.527447  486090 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-327416" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:40:39.704173  486090 pod_ready.go:94] pod "kube-controller-manager-embed-certs-327416" is "Ready"
	I1003 19:40:39.704202  486090 pod_ready.go:86] duration metric: took 176.734521ms for pod "kube-controller-manager-embed-certs-327416" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:40:39.904691  486090 pod_ready.go:83] waiting for pod "kube-proxy-ncw55" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:40:40.303788  486090 pod_ready.go:94] pod "kube-proxy-ncw55" is "Ready"
	I1003 19:40:40.303818  486090 pod_ready.go:86] duration metric: took 399.10123ms for pod "kube-proxy-ncw55" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:40:40.505055  486090 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-327416" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:40:40.904471  486090 pod_ready.go:94] pod "kube-scheduler-embed-certs-327416" is "Ready"
	I1003 19:40:40.904502  486090 pod_ready.go:86] duration metric: took 399.421096ms for pod "kube-scheduler-embed-certs-327416" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:40:40.904515  486090 pod_ready.go:40] duration metric: took 32.462920533s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1003 19:40:40.956798  486090 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1003 19:40:40.959742  486090 out.go:179] * Done! kubectl is now configured to use "embed-certs-327416" cluster and "default" namespace by default
	W1003 19:40:41.523805  483467 node_ready.go:57] node "default-k8s-diff-port-842797" has "Ready":"False" status (will retry)
	W1003 19:40:44.024139  483467 node_ready.go:57] node "default-k8s-diff-port-842797" has "Ready":"False" status (will retry)
	W1003 19:40:46.523635  483467 node_ready.go:57] node "default-k8s-diff-port-842797" has "Ready":"False" status (will retry)
	W1003 19:40:48.523943  483467 node_ready.go:57] node "default-k8s-diff-port-842797" has "Ready":"False" status (will retry)
	W1003 19:40:50.524217  483467 node_ready.go:57] node "default-k8s-diff-port-842797" has "Ready":"False" status (will retry)
	I1003 19:40:52.027836  483467 node_ready.go:49] node "default-k8s-diff-port-842797" is "Ready"
	I1003 19:40:52.027862  483467 node_ready.go:38] duration metric: took 40.007850149s for node "default-k8s-diff-port-842797" to be "Ready" ...
	I1003 19:40:52.027877  483467 api_server.go:52] waiting for apiserver process to appear ...
	I1003 19:40:52.027944  483467 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 19:40:52.040014  483467 api_server.go:72] duration metric: took 40.967403235s to wait for apiserver process to appear ...
	I1003 19:40:52.040039  483467 api_server.go:88] waiting for apiserver healthz status ...
	I1003 19:40:52.040072  483467 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1003 19:40:52.048508  483467 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1003 19:40:52.049767  483467 api_server.go:141] control plane version: v1.34.1
	I1003 19:40:52.049795  483467 api_server.go:131] duration metric: took 9.749928ms to wait for apiserver health ...
	I1003 19:40:52.049805  483467 system_pods.go:43] waiting for kube-system pods to appear ...
	I1003 19:40:52.053328  483467 system_pods.go:59] 8 kube-system pods found
	I1003 19:40:52.053364  483467 system_pods.go:61] "coredns-66bc5c9577-l8knz" [20442eef-faaa-4dfb-bd27-e8f4fda45d0e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1003 19:40:52.053396  483467 system_pods.go:61] "etcd-default-k8s-diff-port-842797" [8db70af0-84e1-42e2-8676-3db2f2732f13] Running
	I1003 19:40:52.053412  483467 system_pods.go:61] "kindnet-96q8s" [ab4664bf-01c0-4b62-9eb8-f65194dff517] Running
	I1003 19:40:52.053417  483467 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-842797" [c7b2a799-b6f6-4be1-a67c-d603d2a8cd7e] Running
	I1003 19:40:52.053427  483467 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-842797" [44ec1bf9-f1e3-4342-bd43-2202ff291aeb] Running
	I1003 19:40:52.053443  483467 system_pods.go:61] "kube-proxy-gvslj" [3cfa5fdd-13b6-4c43-aa02-a74c256ceed2] Running
	I1003 19:40:52.053449  483467 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-842797" [6aba1d05-eec7-4030-b4ee-2b39cd76ec2a] Running
	I1003 19:40:52.053471  483467 system_pods.go:61] "storage-provisioner" [e700db76-d3d4-422f-8069-cb3a0b9ebe86] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1003 19:40:52.053479  483467 system_pods.go:74] duration metric: took 3.669276ms to wait for pod list to return data ...
	I1003 19:40:52.053489  483467 default_sa.go:34] waiting for default service account to be created ...
	I1003 19:40:52.056280  483467 default_sa.go:45] found service account: "default"
	I1003 19:40:52.056303  483467 default_sa.go:55] duration metric: took 2.805279ms for default service account to be created ...
	I1003 19:40:52.056313  483467 system_pods.go:116] waiting for k8s-apps to be running ...
	I1003 19:40:52.059677  483467 system_pods.go:86] 8 kube-system pods found
	I1003 19:40:52.059777  483467 system_pods.go:89] "coredns-66bc5c9577-l8knz" [20442eef-faaa-4dfb-bd27-e8f4fda45d0e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1003 19:40:52.059816  483467 system_pods.go:89] "etcd-default-k8s-diff-port-842797" [8db70af0-84e1-42e2-8676-3db2f2732f13] Running
	I1003 19:40:52.059837  483467 system_pods.go:89] "kindnet-96q8s" [ab4664bf-01c0-4b62-9eb8-f65194dff517] Running
	I1003 19:40:52.059875  483467 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-842797" [c7b2a799-b6f6-4be1-a67c-d603d2a8cd7e] Running
	I1003 19:40:52.059901  483467 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-842797" [44ec1bf9-f1e3-4342-bd43-2202ff291aeb] Running
	I1003 19:40:52.059924  483467 system_pods.go:89] "kube-proxy-gvslj" [3cfa5fdd-13b6-4c43-aa02-a74c256ceed2] Running
	I1003 19:40:52.059958  483467 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-842797" [6aba1d05-eec7-4030-b4ee-2b39cd76ec2a] Running
	I1003 19:40:52.059997  483467 system_pods.go:89] "storage-provisioner" [e700db76-d3d4-422f-8069-cb3a0b9ebe86] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1003 19:40:52.060442  483467 retry.go:31] will retry after 261.053866ms: missing components: kube-dns
	I1003 19:40:52.329752  483467 system_pods.go:86] 8 kube-system pods found
	I1003 19:40:52.329800  483467 system_pods.go:89] "coredns-66bc5c9577-l8knz" [20442eef-faaa-4dfb-bd27-e8f4fda45d0e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1003 19:40:52.329808  483467 system_pods.go:89] "etcd-default-k8s-diff-port-842797" [8db70af0-84e1-42e2-8676-3db2f2732f13] Running
	I1003 19:40:52.329815  483467 system_pods.go:89] "kindnet-96q8s" [ab4664bf-01c0-4b62-9eb8-f65194dff517] Running
	I1003 19:40:52.329820  483467 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-842797" [c7b2a799-b6f6-4be1-a67c-d603d2a8cd7e] Running
	I1003 19:40:52.329824  483467 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-842797" [44ec1bf9-f1e3-4342-bd43-2202ff291aeb] Running
	I1003 19:40:52.329829  483467 system_pods.go:89] "kube-proxy-gvslj" [3cfa5fdd-13b6-4c43-aa02-a74c256ceed2] Running
	I1003 19:40:52.329833  483467 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-842797" [6aba1d05-eec7-4030-b4ee-2b39cd76ec2a] Running
	I1003 19:40:52.329838  483467 system_pods.go:89] "storage-provisioner" [e700db76-d3d4-422f-8069-cb3a0b9ebe86] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1003 19:40:52.329865  483467 retry.go:31] will retry after 323.054015ms: missing components: kube-dns
	I1003 19:40:52.656855  483467 system_pods.go:86] 8 kube-system pods found
	I1003 19:40:52.656883  483467 system_pods.go:89] "coredns-66bc5c9577-l8knz" [20442eef-faaa-4dfb-bd27-e8f4fda45d0e] Running
	I1003 19:40:52.656890  483467 system_pods.go:89] "etcd-default-k8s-diff-port-842797" [8db70af0-84e1-42e2-8676-3db2f2732f13] Running
	I1003 19:40:52.656896  483467 system_pods.go:89] "kindnet-96q8s" [ab4664bf-01c0-4b62-9eb8-f65194dff517] Running
	I1003 19:40:52.656901  483467 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-842797" [c7b2a799-b6f6-4be1-a67c-d603d2a8cd7e] Running
	I1003 19:40:52.656906  483467 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-842797" [44ec1bf9-f1e3-4342-bd43-2202ff291aeb] Running
	I1003 19:40:52.656912  483467 system_pods.go:89] "kube-proxy-gvslj" [3cfa5fdd-13b6-4c43-aa02-a74c256ceed2] Running
	I1003 19:40:52.656916  483467 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-842797" [6aba1d05-eec7-4030-b4ee-2b39cd76ec2a] Running
	I1003 19:40:52.656921  483467 system_pods.go:89] "storage-provisioner" [e700db76-d3d4-422f-8069-cb3a0b9ebe86] Running
	I1003 19:40:52.656928  483467 system_pods.go:126] duration metric: took 600.610578ms to wait for k8s-apps to be running ...
	I1003 19:40:52.656936  483467 system_svc.go:44] waiting for kubelet service to be running ....
	I1003 19:40:52.656997  483467 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 19:40:52.673541  483467 system_svc.go:56] duration metric: took 16.594891ms WaitForService to wait for kubelet
	I1003 19:40:52.673568  483467 kubeadm.go:586] duration metric: took 41.60096318s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 19:40:52.673585  483467 node_conditions.go:102] verifying NodePressure condition ...
	I1003 19:40:52.677840  483467 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1003 19:40:52.677870  483467 node_conditions.go:123] node cpu capacity is 2
	I1003 19:40:52.677883  483467 node_conditions.go:105] duration metric: took 4.29262ms to run NodePressure ...
	I1003 19:40:52.677895  483467 start.go:241] waiting for startup goroutines ...
	I1003 19:40:52.677903  483467 start.go:246] waiting for cluster config update ...
	I1003 19:40:52.677914  483467 start.go:255] writing updated cluster config ...
	I1003 19:40:52.678211  483467 ssh_runner.go:195] Run: rm -f paused
	I1003 19:40:52.681908  483467 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1003 19:40:52.757076  483467 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-l8knz" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:40:52.762984  483467 pod_ready.go:94] pod "coredns-66bc5c9577-l8knz" is "Ready"
	I1003 19:40:52.763010  483467 pod_ready.go:86] duration metric: took 5.909523ms for pod "coredns-66bc5c9577-l8knz" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:40:52.765790  483467 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-842797" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:40:52.770961  483467 pod_ready.go:94] pod "etcd-default-k8s-diff-port-842797" is "Ready"
	I1003 19:40:52.770981  483467 pod_ready.go:86] duration metric: took 5.173988ms for pod "etcd-default-k8s-diff-port-842797" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:40:52.774100  483467 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-842797" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:40:52.779416  483467 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-842797" is "Ready"
	I1003 19:40:52.779438  483467 pod_ready.go:86] duration metric: took 5.315413ms for pod "kube-apiserver-default-k8s-diff-port-842797" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:40:52.782100  483467 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-842797" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:40:53.086517  483467 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-842797" is "Ready"
	I1003 19:40:53.086550  483467 pod_ready.go:86] duration metric: took 304.4295ms for pod "kube-controller-manager-default-k8s-diff-port-842797" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:40:53.290421  483467 pod_ready.go:83] waiting for pod "kube-proxy-gvslj" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:40:53.686648  483467 pod_ready.go:94] pod "kube-proxy-gvslj" is "Ready"
	I1003 19:40:53.686681  483467 pod_ready.go:86] duration metric: took 396.235813ms for pod "kube-proxy-gvslj" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:40:53.887166  483467 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-842797" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:40:54.286893  483467 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-842797" is "Ready"
	I1003 19:40:54.286916  483467 pod_ready.go:86] duration metric: took 399.662262ms for pod "kube-scheduler-default-k8s-diff-port-842797" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:40:54.286928  483467 pod_ready.go:40] duration metric: took 1.60498969s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1003 19:40:54.371232  483467 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1003 19:40:54.375125  483467 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-842797" cluster and "default" namespace by default
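	
	The pod_ready/node_ready waits in the start logs above are driven by the pods' and node's Ready conditions (pod_ready.go:83/94, node_ready.go:49). As a minimal illustrative sketch only, not minikube's implementation, the check roughly corresponds to reading the Ready condition via client-go; the kubeconfig path and pod name below are hypothetical placeholders taken from this log.
	
	    package main
	
	    import (
	    	"context"
	    	"fmt"
	
	    	corev1 "k8s.io/api/core/v1"
	    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    	"k8s.io/client-go/kubernetes"
	    	"k8s.io/client-go/tools/clientcmd"
	    )
	
	    // isPodReady reports whether the pod's Ready condition is True, which is
	    // what the pod_ready waits in the log above are effectively polling for.
	    func isPodReady(pod *corev1.Pod) bool {
	    	for _, cond := range pod.Status.Conditions {
	    		if cond.Type == corev1.PodReady {
	    			return cond.Status == corev1.ConditionTrue
	    		}
	    	}
	    	return false
	    }
	
	    func main() {
	    	// Hypothetical kubeconfig path; substitute your own.
	    	config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	    	if err != nil {
	    		panic(err)
	    	}
	    	clientset, err := kubernetes.NewForConfig(config)
	    	if err != nil {
	    		panic(err)
	    	}
	    	// Pod name taken from this log for illustration.
	    	pod, err := clientset.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-66bc5c9577-bjdpd", metav1.GetOptions{})
	    	if err != nil {
	    		panic(err)
	    	}
	    	fmt.Printf("pod %s Ready=%v\n", pod.Name, isPodReady(pod))
	    }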
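	
	The "waiting for apiserver healthz status" step (api_server.go:253, which logs "returned 200: ok" above) amounts to polling the /healthz endpoint until it answers 200/"ok". The sketch below shows that polling pattern under stated assumptions: the URL is the one from this log, and the timeout and InsecureSkipVerify transport are illustrative choices, not minikube's actual code.
	
	    package main
	
	    import (
	    	"crypto/tls"
	    	"fmt"
	    	"io"
	    	"net/http"
	    	"time"
	    )
	
	    // waitForHealthz polls url until it returns HTTP 200 with body "ok",
	    // mirroring the healthz wait seen in the log above.
	    func waitForHealthz(url string, timeout time.Duration) error {
	    	client := &http.Client{
	    		Timeout: 2 * time.Second,
	    		Transport: &http.Transport{
	    			// The apiserver cert is not trusted by the host here, so the
	    			// sketch skips verification; a real client would pin the CA.
	    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	    		},
	    	}
	    	deadline := time.Now().Add(timeout)
	    	for time.Now().Before(deadline) {
	    		resp, err := client.Get(url)
	    		if err == nil {
	    			body, _ := io.ReadAll(resp.Body)
	    			resp.Body.Close()
	    			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
	    				return nil
	    			}
	    		}
	    		time.Sleep(500 * time.Millisecond)
	    	}
	    	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
	    }
	
	    func main() {
	    	// Endpoint taken from this log (default-k8s-diff-port serves on 8444).
	    	if err := waitForHealthz("https://192.168.76.2:8444/healthz", 30*time.Second); err != nil {
	    		fmt.Println(err)
	    		return
	    	}
	    	fmt.Println("apiserver healthz: ok")
	    }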
	
	
	==> CRI-O <==
	Oct 03 19:40:37 embed-certs-327416 crio[646]: time="2025-10-03T19:40:37.903001254Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=c133263a-f5e8-4a1d-8698-3fa93541c765 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 19:40:37 embed-certs-327416 crio[646]: time="2025-10-03T19:40:37.908705506Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=a19c2d01-8e86-44ca-8d1a-e4a0d4343abc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:40:37 embed-certs-327416 crio[646]: time="2025-10-03T19:40:37.909385771Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 19:40:37 embed-certs-327416 crio[646]: time="2025-10-03T19:40:37.916409593Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 19:40:37 embed-certs-327416 crio[646]: time="2025-10-03T19:40:37.916576791Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/18900d5caa3a3b4bad306d9a2cea9e3a63d7de638d707e02a5586f6e1ee15d9d/merged/etc/passwd: no such file or directory"
	Oct 03 19:40:37 embed-certs-327416 crio[646]: time="2025-10-03T19:40:37.916598478Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/18900d5caa3a3b4bad306d9a2cea9e3a63d7de638d707e02a5586f6e1ee15d9d/merged/etc/group: no such file or directory"
	Oct 03 19:40:37 embed-certs-327416 crio[646]: time="2025-10-03T19:40:37.916870572Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 19:40:37 embed-certs-327416 crio[646]: time="2025-10-03T19:40:37.932542231Z" level=info msg="Created container 5e66dc6a1481b362f77729de7b87a40c80a9b3559f540b5a8bd6f55ec6c8f731: kube-system/storage-provisioner/storage-provisioner" id=a19c2d01-8e86-44ca-8d1a-e4a0d4343abc name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:40:37 embed-certs-327416 crio[646]: time="2025-10-03T19:40:37.937049862Z" level=info msg="Starting container: 5e66dc6a1481b362f77729de7b87a40c80a9b3559f540b5a8bd6f55ec6c8f731" id=b9a42191-8cd4-42b6-9c14-44c97c14a514 name=/runtime.v1.RuntimeService/StartContainer
	Oct 03 19:40:37 embed-certs-327416 crio[646]: time="2025-10-03T19:40:37.94088819Z" level=info msg="Started container" PID=1632 containerID=5e66dc6a1481b362f77729de7b87a40c80a9b3559f540b5a8bd6f55ec6c8f731 description=kube-system/storage-provisioner/storage-provisioner id=b9a42191-8cd4-42b6-9c14-44c97c14a514 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a08ec79b430293fc18fe391a6d7e109bd33baac133eb712f4cc8e57ccb685f26
	Oct 03 19:40:47 embed-certs-327416 crio[646]: time="2025-10-03T19:40:47.542787424Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 03 19:40:47 embed-certs-327416 crio[646]: time="2025-10-03T19:40:47.547313935Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 03 19:40:47 embed-certs-327416 crio[646]: time="2025-10-03T19:40:47.547345246Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 03 19:40:47 embed-certs-327416 crio[646]: time="2025-10-03T19:40:47.547368122Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 03 19:40:47 embed-certs-327416 crio[646]: time="2025-10-03T19:40:47.550707333Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 03 19:40:47 embed-certs-327416 crio[646]: time="2025-10-03T19:40:47.550746004Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 03 19:40:47 embed-certs-327416 crio[646]: time="2025-10-03T19:40:47.550769324Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 03 19:40:47 embed-certs-327416 crio[646]: time="2025-10-03T19:40:47.55389578Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 03 19:40:47 embed-certs-327416 crio[646]: time="2025-10-03T19:40:47.554060164Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 03 19:40:47 embed-certs-327416 crio[646]: time="2025-10-03T19:40:47.554096916Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 03 19:40:47 embed-certs-327416 crio[646]: time="2025-10-03T19:40:47.55746185Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 03 19:40:47 embed-certs-327416 crio[646]: time="2025-10-03T19:40:47.557497362Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 03 19:40:47 embed-certs-327416 crio[646]: time="2025-10-03T19:40:47.557522249Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 03 19:40:47 embed-certs-327416 crio[646]: time="2025-10-03T19:40:47.561498023Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 03 19:40:47 embed-certs-327416 crio[646]: time="2025-10-03T19:40:47.561537244Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	5e66dc6a1481b       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           20 seconds ago      Running             storage-provisioner         2                   a08ec79b43029       storage-provisioner                          kube-system
	a738125ff91fa       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           25 seconds ago      Exited              dashboard-metrics-scraper   2                   5c4eb3421c96a       dashboard-metrics-scraper-6ffb444bf9-pdwhc   kubernetes-dashboard
	a789d122b33c0       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   41 seconds ago      Running             kubernetes-dashboard        0                   a3dcd7fd1edef       kubernetes-dashboard-855c9754f9-4hzk6        kubernetes-dashboard
	f08f692651a4c       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           50 seconds ago      Running             coredns                     1                   2e5cbd6315354       coredns-66bc5c9577-bjdpd                     kube-system
	f4b23575b27ca       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           50 seconds ago      Running             busybox                     1                   8c58b36d1d8ac       busybox                                      default
	e082ac152bed0       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           50 seconds ago      Running             kube-proxy                  1                   cda0e0a01f05e       kube-proxy-ncw55                             kube-system
	feab4d04b3ff4       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           51 seconds ago      Running             kindnet-cni                 1                   61aece1b64bff       kindnet-2jswv                                kube-system
	a099b0263e1ca       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           51 seconds ago      Exited              storage-provisioner         1                   a08ec79b43029       storage-provisioner                          kube-system
	7251d8be4bbe1       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           59 seconds ago      Running             kube-controller-manager     1                   c300579b36ce0       kube-controller-manager-embed-certs-327416   kube-system
	d175d98dcd2f4       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           59 seconds ago      Running             kube-apiserver              1                   c899c8cde9b07       kube-apiserver-embed-certs-327416            kube-system
	58e88d8c2849a       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           59 seconds ago      Running             kube-scheduler              1                   096ab3d677b68       kube-scheduler-embed-certs-327416            kube-system
	0c6c5a56f754c       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           59 seconds ago      Running             etcd                        1                   3060be50efc34       etcd-embed-certs-327416                      kube-system
	
	
	==> coredns [f08f692651a4c24dbc7f5c2d01b62f4b3444fe292b2f5c83c3522aac293a2680] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:32920 - 42303 "HINFO IN 3835729374393202696.7808947450009168741. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.024206034s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-327416
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-327416
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a43873c79fc22f8b1ccd29d3dfa635d392b09335
	                    minikube.k8s.io/name=embed-certs-327416
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_03T19_38_33_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 03 Oct 2025 19:38:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-327416
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 03 Oct 2025 19:40:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 03 Oct 2025 19:40:36 +0000   Fri, 03 Oct 2025 19:38:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 03 Oct 2025 19:40:36 +0000   Fri, 03 Oct 2025 19:38:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 03 Oct 2025 19:40:36 +0000   Fri, 03 Oct 2025 19:38:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 03 Oct 2025 19:40:36 +0000   Fri, 03 Oct 2025 19:39:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-327416
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c7fbf585ecd44806b77018469dd7b7db
	  System UUID:                fb79a29c-023c-4bd8-a646-01fac5e931e0
	  Boot ID:                    3762136e-8bec-4104-a5cb-0b1976f6048e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-66bc5c9577-bjdpd                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m21s
	  kube-system                 etcd-embed-certs-327416                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m26s
	  kube-system                 kindnet-2jswv                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m21s
	  kube-system                 kube-apiserver-embed-certs-327416             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 kube-controller-manager-embed-certs-327416    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m29s
	  kube-system                 kube-proxy-ncw55                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 kube-scheduler-embed-certs-327416             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-pdwhc    0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-4hzk6         0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m18s                  kube-proxy       
	  Normal   Starting                 49s                    kube-proxy       
	  Normal   NodeHasSufficientMemory  2m35s (x8 over 2m35s)  kubelet          Node embed-certs-327416 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m35s (x8 over 2m35s)  kubelet          Node embed-certs-327416 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m35s (x8 over 2m35s)  kubelet          Node embed-certs-327416 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m26s                  kubelet          Node embed-certs-327416 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m26s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m26s                  kubelet          Node embed-certs-327416 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m26s                  kubelet          Node embed-certs-327416 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m26s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m22s                  node-controller  Node embed-certs-327416 event: Registered Node embed-certs-327416 in Controller
	  Normal   NodeReady                99s                    kubelet          Node embed-certs-327416 status is now: NodeReady
	  Normal   Starting                 61s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 61s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  61s (x8 over 61s)      kubelet          Node embed-certs-327416 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    61s (x8 over 61s)      kubelet          Node embed-certs-327416 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     61s (x8 over 61s)      kubelet          Node embed-certs-327416 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           48s                    node-controller  Node embed-certs-327416 event: Registered Node embed-certs-327416 in Controller
	
	
	==> dmesg <==
	[Oct 3 19:11] overlayfs: idmapped layers are currently not supported
	[  +4.287643] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:12] overlayfs: idmapped layers are currently not supported
	[ +24.839009] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:13] overlayfs: idmapped layers are currently not supported
	[ +26.493253] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:15] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:16] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:17] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000010] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Oct 3 19:18] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:20] overlayfs: idmapped layers are currently not supported
	[ +32.018892] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:22] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:24] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:26] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:32] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:34] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:35] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:36] overlayfs: idmapped layers are currently not supported
	[  +4.740983] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:38] overlayfs: idmapped layers are currently not supported
	[ +12.897300] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:39] overlayfs: idmapped layers are currently not supported
	[  +4.104516] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [0c6c5a56f754c48cee635b6a3f179cd14335b49d4105c542ea8de2a52f7a1289] <==
	{"level":"warn","ts":"2025-10-03T19:40:03.275933Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:40:03.341638Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:40:03.402043Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:40:03.432981Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:40:03.497717Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:40:03.537558Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:40:03.584906Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:40:03.626142Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:40:03.660932Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:40:03.707810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:40:03.766595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:40:03.901767Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:40:03.937810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:40:03.969646Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:40:04.008508Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:40:04.025004Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:40:04.047218Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:40:04.071160Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:40:04.107014Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:40:04.148382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:40:04.192365Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:40:04.243472Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:40:04.305534Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:40:04.356821Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:40:04.523305Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43074","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 19:40:58 up  2:23,  0 user,  load average: 3.72, 3.25, 2.42
	Linux embed-certs-327416 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [feab4d04b3ff4dcec9c7a34ced7bd215e07b33afff0b593771ec98a30d1421e9] <==
	I1003 19:40:07.328195       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1003 19:40:07.328692       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1003 19:40:07.335212       1 main.go:148] setting mtu 1500 for CNI 
	I1003 19:40:07.335244       1 main.go:178] kindnetd IP family: "ipv4"
	I1003 19:40:07.335262       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-03T19:40:07Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1003 19:40:07.538932       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1003 19:40:07.538949       1 controller.go:381] "Waiting for informer caches to sync"
	I1003 19:40:07.538957       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1003 19:40:07.539234       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1003 19:40:37.538870       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1003 19:40:37.538997       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1003 19:40:37.539878       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1003 19:40:37.562395       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1003 19:40:39.039980       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1003 19:40:39.040015       1 metrics.go:72] Registering metrics
	I1003 19:40:39.040087       1 controller.go:711] "Syncing nftables rules"
	I1003 19:40:47.542416       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1003 19:40:47.542474       1 main.go:301] handling current node
	I1003 19:40:57.547469       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1003 19:40:57.547582       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d175d98dcd2f4aad68e57c312506a537fcec4add7ab32b2ffa4c3126efd41601] <==
	I1003 19:40:06.066518       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1003 19:40:06.066980       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1003 19:40:06.102218       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1003 19:40:06.102264       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1003 19:40:06.102294       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1003 19:40:06.102331       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1003 19:40:06.118651       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1003 19:40:06.138080       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1003 19:40:06.148906       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1003 19:40:06.150191       1 aggregator.go:171] initial CRD sync complete...
	I1003 19:40:06.150225       1 autoregister_controller.go:144] Starting autoregister controller
	I1003 19:40:06.150233       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1003 19:40:06.150240       1 cache.go:39] Caches are synced for autoregister controller
	E1003 19:40:06.294444       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1003 19:40:06.563465       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1003 19:40:06.681907       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1003 19:40:07.682858       1 controller.go:667] quota admission added evaluator for: namespaces
	I1003 19:40:07.857597       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1003 19:40:08.005782       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1003 19:40:08.094510       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1003 19:40:08.289518       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.56.130"}
	I1003 19:40:08.324167       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.225.1"}
	I1003 19:40:10.465384       1 controller.go:667] quota admission added evaluator for: endpoints
	I1003 19:40:10.573960       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1003 19:40:10.637625       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [7251d8be4bbe1feadb8d7586aad5c359dbd66fd31d01b439cbe4b247e9edacb9] <==
	I1003 19:40:10.217538       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1003 19:40:10.210395       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1003 19:40:10.221012       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1003 19:40:10.221689       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1003 19:40:10.222857       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1003 19:40:10.223442       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1003 19:40:10.224665       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1003 19:40:10.236547       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1003 19:40:10.239511       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1003 19:40:10.240869       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1003 19:40:10.243007       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1003 19:40:10.249390       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1003 19:40:10.251620       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1003 19:40:10.257402       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1003 19:40:10.257612       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1003 19:40:10.257682       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1003 19:40:10.263746       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1003 19:40:10.263866       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1003 19:40:10.268770       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1003 19:40:10.274922       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1003 19:40:10.276761       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1003 19:40:10.279087       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1003 19:40:10.309872       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1003 19:40:10.315811       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1003 19:40:10.315877       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	
	
	==> kube-proxy [e082ac152bed0226fa5fbaf16b5adae1367f37de196398b9aa393d4b2682c3bb] <==
	I1003 19:40:08.360892       1 server_linux.go:53] "Using iptables proxy"
	I1003 19:40:08.534450       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1003 19:40:08.642693       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1003 19:40:08.642816       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1003 19:40:08.646121       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1003 19:40:08.695242       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1003 19:40:08.695395       1 server_linux.go:132] "Using iptables Proxier"
	I1003 19:40:08.701460       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1003 19:40:08.715260       1 server.go:527] "Version info" version="v1.34.1"
	I1003 19:40:08.715295       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1003 19:40:08.717432       1 config.go:200] "Starting service config controller"
	I1003 19:40:08.717457       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1003 19:40:08.717491       1 config.go:106] "Starting endpoint slice config controller"
	I1003 19:40:08.717496       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1003 19:40:08.717514       1 config.go:403] "Starting serviceCIDR config controller"
	I1003 19:40:08.717518       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1003 19:40:08.718461       1 config.go:309] "Starting node config controller"
	I1003 19:40:08.718483       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1003 19:40:08.718491       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1003 19:40:08.825645       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1003 19:40:08.825757       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1003 19:40:08.825768       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [58e88d8c2849a5437eb7767eb255d61ad53372f61e98f7b15fba814d13e38b12] <==
	I1003 19:40:08.536036       1 serving.go:386] Generated self-signed cert in-memory
	I1003 19:40:10.452931       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1003 19:40:10.452979       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1003 19:40:10.476523       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1003 19:40:10.476626       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1003 19:40:10.476656       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1003 19:40:10.476694       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1003 19:40:10.483119       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1003 19:40:10.485599       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1003 19:40:10.485969       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1003 19:40:10.485979       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1003 19:40:10.577035       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1003 19:40:10.586684       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1003 19:40:10.586830       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Oct 03 19:40:10 embed-certs-327416 kubelet[771]: I1003 19:40:10.961023     771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/4e9fe78a-88e3-4ce0-9e2e-9e4442ab2967-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-4hzk6\" (UID: \"4e9fe78a-88e3-4ce0-9e2e-9e4442ab2967\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4hzk6"
	Oct 03 19:40:10 embed-certs-327416 kubelet[771]: I1003 19:40:10.961098     771 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wjrj\" (UniqueName: \"kubernetes.io/projected/4e9fe78a-88e3-4ce0-9e2e-9e4442ab2967-kube-api-access-6wjrj\") pod \"kubernetes-dashboard-855c9754f9-4hzk6\" (UID: \"4e9fe78a-88e3-4ce0-9e2e-9e4442ab2967\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4hzk6"
	Oct 03 19:40:11 embed-certs-327416 kubelet[771]: E1003 19:40:11.979567     771 projected.go:291] Couldn't get configMap kubernetes-dashboard/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Oct 03 19:40:11 embed-certs-327416 kubelet[771]: E1003 19:40:11.979634     771 projected.go:196] Error preparing data for projected volume kube-api-access-pt58f for pod kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pdwhc: failed to sync configmap cache: timed out waiting for the condition
	Oct 03 19:40:11 embed-certs-327416 kubelet[771]: E1003 19:40:11.979739     771 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5e6f44a6-6e24-4a4a-be20-79f9cfa4dc30-kube-api-access-pt58f podName:5e6f44a6-6e24-4a4a-be20-79f9cfa4dc30 nodeName:}" failed. No retries permitted until 2025-10-03 19:40:12.479707272 +0000 UTC m=+15.238358753 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-pt58f" (UniqueName: "kubernetes.io/projected/5e6f44a6-6e24-4a4a-be20-79f9cfa4dc30-kube-api-access-pt58f") pod "dashboard-metrics-scraper-6ffb444bf9-pdwhc" (UID: "5e6f44a6-6e24-4a4a-be20-79f9cfa4dc30") : failed to sync configmap cache: timed out waiting for the condition
	Oct 03 19:40:12 embed-certs-327416 kubelet[771]: W1003 19:40:12.076595     771 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/7044b9fbdfefb3fd8bce7381adae2abdcd93d79fb8452cc72e2f26e58ccd8222/crio-a3dcd7fd1edefcc4b713308880237296fbe694ed3868c2fc919d67ecbf22e208 WatchSource:0}: Error finding container a3dcd7fd1edefcc4b713308880237296fbe694ed3868c2fc919d67ecbf22e208: Status 404 returned error can't find the container with id a3dcd7fd1edefcc4b713308880237296fbe694ed3868c2fc919d67ecbf22e208
	Oct 03 19:40:12 embed-certs-327416 kubelet[771]: W1003 19:40:12.657298     771 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/7044b9fbdfefb3fd8bce7381adae2abdcd93d79fb8452cc72e2f26e58ccd8222/crio-5c4eb3421c96a161a04d970875259c032e6566f813f81c5087e5b90315e4087e WatchSource:0}: Error finding container 5c4eb3421c96a161a04d970875259c032e6566f813f81c5087e5b90315e4087e: Status 404 returned error can't find the container with id 5c4eb3421c96a161a04d970875259c032e6566f813f81c5087e5b90315e4087e
	Oct 03 19:40:21 embed-certs-327416 kubelet[771]: I1003 19:40:21.852775     771 scope.go:117] "RemoveContainer" containerID="a29f167ca9d8327aa605d948ba460fdb021614a61d566ba513e53dbdfeeb2206"
	Oct 03 19:40:21 embed-certs-327416 kubelet[771]: I1003 19:40:21.883079     771 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4hzk6" podStartSLOduration=7.201842542 podStartE2EDuration="11.883060988s" podCreationTimestamp="2025-10-03 19:40:10 +0000 UTC" firstStartedPulling="2025-10-03 19:40:12.087076965 +0000 UTC m=+14.845728445" lastFinishedPulling="2025-10-03 19:40:16.76829541 +0000 UTC m=+19.526946891" observedRunningTime="2025-10-03 19:40:16.874420136 +0000 UTC m=+19.633071617" watchObservedRunningTime="2025-10-03 19:40:21.883060988 +0000 UTC m=+24.641712469"
	Oct 03 19:40:22 embed-certs-327416 kubelet[771]: I1003 19:40:22.856931     771 scope.go:117] "RemoveContainer" containerID="a29f167ca9d8327aa605d948ba460fdb021614a61d566ba513e53dbdfeeb2206"
	Oct 03 19:40:22 embed-certs-327416 kubelet[771]: I1003 19:40:22.857234     771 scope.go:117] "RemoveContainer" containerID="2bdaa5b7d0db718394917a8fcfe82c67f2bf8b9950ac1ba169c79c77673ff700"
	Oct 03 19:40:22 embed-certs-327416 kubelet[771]: E1003 19:40:22.858280     771 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-pdwhc_kubernetes-dashboard(5e6f44a6-6e24-4a4a-be20-79f9cfa4dc30)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pdwhc" podUID="5e6f44a6-6e24-4a4a-be20-79f9cfa4dc30"
	Oct 03 19:40:23 embed-certs-327416 kubelet[771]: I1003 19:40:23.861395     771 scope.go:117] "RemoveContainer" containerID="2bdaa5b7d0db718394917a8fcfe82c67f2bf8b9950ac1ba169c79c77673ff700"
	Oct 03 19:40:23 embed-certs-327416 kubelet[771]: E1003 19:40:23.861926     771 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-pdwhc_kubernetes-dashboard(5e6f44a6-6e24-4a4a-be20-79f9cfa4dc30)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pdwhc" podUID="5e6f44a6-6e24-4a4a-be20-79f9cfa4dc30"
	Oct 03 19:40:32 embed-certs-327416 kubelet[771]: I1003 19:40:32.582102     771 scope.go:117] "RemoveContainer" containerID="2bdaa5b7d0db718394917a8fcfe82c67f2bf8b9950ac1ba169c79c77673ff700"
	Oct 03 19:40:32 embed-certs-327416 kubelet[771]: I1003 19:40:32.883522     771 scope.go:117] "RemoveContainer" containerID="2bdaa5b7d0db718394917a8fcfe82c67f2bf8b9950ac1ba169c79c77673ff700"
	Oct 03 19:40:32 embed-certs-327416 kubelet[771]: I1003 19:40:32.883794     771 scope.go:117] "RemoveContainer" containerID="a738125ff91fa9557f957b47e040af0afc4e0c20eba8d133f0a7232ec66b0d66"
	Oct 03 19:40:32 embed-certs-327416 kubelet[771]: E1003 19:40:32.883955     771 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-pdwhc_kubernetes-dashboard(5e6f44a6-6e24-4a4a-be20-79f9cfa4dc30)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pdwhc" podUID="5e6f44a6-6e24-4a4a-be20-79f9cfa4dc30"
	Oct 03 19:40:37 embed-certs-327416 kubelet[771]: I1003 19:40:37.898146     771 scope.go:117] "RemoveContainer" containerID="a099b0263e1ca1acdf33e1af73c68951785e54c0ba213fdfbcb1bb8d81e98644"
	Oct 03 19:40:42 embed-certs-327416 kubelet[771]: I1003 19:40:42.581499     771 scope.go:117] "RemoveContainer" containerID="a738125ff91fa9557f957b47e040af0afc4e0c20eba8d133f0a7232ec66b0d66"
	Oct 03 19:40:42 embed-certs-327416 kubelet[771]: E1003 19:40:42.581690     771 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-pdwhc_kubernetes-dashboard(5e6f44a6-6e24-4a4a-be20-79f9cfa4dc30)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-pdwhc" podUID="5e6f44a6-6e24-4a4a-be20-79f9cfa4dc30"
	Oct 03 19:40:53 embed-certs-327416 kubelet[771]: I1003 19:40:53.206540     771 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 03 19:40:53 embed-certs-327416 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 03 19:40:53 embed-certs-327416 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 03 19:40:53 embed-certs-327416 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [a789d122b33c055f37ef455982128473a2a103a67ed53fffdb7d04275c3e1c56] <==
	2025/10/03 19:40:16 Starting overwatch
	2025/10/03 19:40:16 Using namespace: kubernetes-dashboard
	2025/10/03 19:40:16 Using in-cluster config to connect to apiserver
	2025/10/03 19:40:16 Using secret token for csrf signing
	2025/10/03 19:40:16 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/03 19:40:16 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/03 19:40:16 Successful initial request to the apiserver, version: v1.34.1
	2025/10/03 19:40:16 Generating JWE encryption key
	2025/10/03 19:40:16 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/03 19:40:16 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/03 19:40:18 Initializing JWE encryption key from synchronized object
	2025/10/03 19:40:18 Creating in-cluster Sidecar client
	2025/10/03 19:40:18 Serving insecurely on HTTP port: 9090
	2025/10/03 19:40:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/03 19:40:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [5e66dc6a1481b362f77729de7b87a40c80a9b3559f540b5a8bd6f55ec6c8f731] <==
	I1003 19:40:37.955845       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1003 19:40:37.970702       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1003 19:40:37.970834       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1003 19:40:37.973222       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:40:41.428441       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:40:45.688913       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:40:49.286982       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:40:52.340357       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:40:55.362911       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:40:55.370277       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1003 19:40:55.370438       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1003 19:40:55.370597       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-327416_e05b3a86-f980-4eb5-948f-9c7316119d8a!
	I1003 19:40:55.371213       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"879423f6-b7ff-450e-9e7a-f9f8ef1edeae", APIVersion:"v1", ResourceVersion:"685", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-327416_e05b3a86-f980-4eb5-948f-9c7316119d8a became leader
	W1003 19:40:55.375712       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:40:55.379609       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1003 19:40:55.471687       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-327416_e05b3a86-f980-4eb5-948f-9c7316119d8a!
	W1003 19:40:57.382135       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:40:57.389809       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [a099b0263e1ca1acdf33e1af73c68951785e54c0ba213fdfbcb1bb8d81e98644] <==
	I1003 19:40:07.689476       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1003 19:40:37.695190       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
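The two storage-provisioner blocks above tell a simple story: the first instance (a099b02...) starts at 19:40:07 and exits fatally about 30 seconds later because it cannot reach the in-cluster apiserver VIP at 10.96.0.1:443, while the replacement (5e66dc6...) started at 19:40:37 acquires the leader lease and runs normally. If a similar timeout needs triage, a hedged pair of checks with plain kubectl (not part of the test harness, and only meaningful while the embed-certs-327416 profile still exists; it is deleted later in this run) confirms that the VIP maps to a live apiserver endpoint:
	kubectl --context embed-certs-327416 get svc kubernetes -n default
	kubectl --context embed-certs-327416 get endpoints kubernetes -n default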
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-327416 -n embed-certs-327416
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-327416 -n embed-certs-327416: exit status 2 (368.14003ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-327416 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (6.40s)
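Note that the post-mortem status above still reports the apiserver as Running (despite the exit-2 wrapper), so the Pause failure is more likely in minikube's pause/runtime path than in the cluster being down. A hedged way to inspect the node's container state by hand, mirroring the runc check minikube performs elsewhere in this report (the profile must still exist, which it does not by the end of the run):
	out/minikube-linux-arm64 -p embed-certs-327416 ssh -- sudo crictl ps -a
	out/minikube-linux-arm64 -p embed-certs-327416 ssh -- sudo runc list -f json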

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (3.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-842797 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-842797 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (330.410571ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-03T19:41:03Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-842797 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
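The exit here comes from minikube's paused-state pre-check, not from the addon manifest: per the stderr above, the check shells into the node and runs `sudo runc list -f json`, and runc aborts because /run/runc does not exist. A hedged way to reproduce that check by hand against the same profile (minikube ssh and crictl are standard tooling, not part of this test):
	out/minikube-linux-arm64 -p default-k8s-diff-port-842797 ssh -- sudo runc list -f json
	out/minikube-linux-arm64 -p default-k8s-diff-port-842797 ssh -- sudo crictl ps -a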
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-842797 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-842797 describe deploy/metrics-server -n kube-system: exit status 1 (131.858553ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-842797 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
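The image assertion fails only because the describe above found no metrics-server deployment at all; the enable step never got far enough to create one. Once the addon is actually installed, a hedged one-liner with plain kubectl shows whether the custom registry/image override took effect:
	kubectl --context default-k8s-diff-port-842797 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'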
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-842797
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-842797:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "dd1cbce823c3c68d280f6d6431457674ab5e928f19effd4b41908fc33cc07deb",
	        "Created": "2025-10-03T19:39:31.38545341Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 483856,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-03T19:39:31.464464774Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/dd1cbce823c3c68d280f6d6431457674ab5e928f19effd4b41908fc33cc07deb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dd1cbce823c3c68d280f6d6431457674ab5e928f19effd4b41908fc33cc07deb/hostname",
	        "HostsPath": "/var/lib/docker/containers/dd1cbce823c3c68d280f6d6431457674ab5e928f19effd4b41908fc33cc07deb/hosts",
	        "LogPath": "/var/lib/docker/containers/dd1cbce823c3c68d280f6d6431457674ab5e928f19effd4b41908fc33cc07deb/dd1cbce823c3c68d280f6d6431457674ab5e928f19effd4b41908fc33cc07deb-json.log",
	        "Name": "/default-k8s-diff-port-842797",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-842797:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-842797",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dd1cbce823c3c68d280f6d6431457674ab5e928f19effd4b41908fc33cc07deb",
	                "LowerDir": "/var/lib/docker/overlay2/bbf4d12c39f5d56f33173d11971fd8a2d5507eec84c402825790261c2e06dc86-init/diff:/var/lib/docker/overlay2/87b205803817b0b71a214d995ab7e10a92033bbf72d76d6e052f1d21ccecb313/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bbf4d12c39f5d56f33173d11971fd8a2d5507eec84c402825790261c2e06dc86/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bbf4d12c39f5d56f33173d11971fd8a2d5507eec84c402825790261c2e06dc86/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bbf4d12c39f5d56f33173d11971fd8a2d5507eec84c402825790261c2e06dc86/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-842797",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-842797/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-842797",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-842797",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-842797",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "920d1db742b5a4db83c3a216c41fd858704b97f365a4f9dcaf0448df49d8738f",
	            "SandboxKey": "/var/run/docker/netns/920d1db742b5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33443"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33444"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33447"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33445"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33446"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-842797": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9e:74:c6:14:74:f1",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b6308a07ab6648978544dad609ae3a504a2e2942784508f1578ab5933d54e3b9",
	                    "EndpointID": "ebfeef01088d562f12ca1597d2312750f229813c633b02b451abdbec8cb2d4c0",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-842797",
	                        "dd1cbce823c3"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
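The inspect output shows the node container running, with the non-default apiserver port 8444 published to 127.0.0.1:33446 on the host. If apiserver reachability from the CI host is ever in question, a hedged probe against that mapping works with plain curl (/livez is a stock kube-apiserver endpoint, and even an auth error proves the port is reachable):
	curl -sk https://127.0.0.1:33446/livez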
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-842797 -n default-k8s-diff-port-842797
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-842797 logs -n 25
E1003 19:41:04.926649  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/old-k8s-version-174543/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 19:41:04.933005  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/old-k8s-version-174543/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 19:41:04.944353  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/old-k8s-version-174543/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 19:41:04.966405  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/old-k8s-version-174543/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 19:41:05.008776  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/old-k8s-version-174543/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 19:41:05.090909  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/old-k8s-version-174543/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-842797 logs -n 25: (1.482626809s)
E1003 19:41:05.253008  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/old-k8s-version-174543/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ image   │ old-k8s-version-174543 image list --format=json                                                                                                                                                                                               │ old-k8s-version-174543       │ jenkins │ v1.37.0 │ 03 Oct 25 19:37 UTC │ 03 Oct 25 19:37 UTC │
	│ pause   │ -p old-k8s-version-174543 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-174543       │ jenkins │ v1.37.0 │ 03 Oct 25 19:37 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-643397 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-643397            │ jenkins │ v1.37.0 │ 03 Oct 25 19:37 UTC │                     │
	│ stop    │ -p no-preload-643397 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-643397            │ jenkins │ v1.37.0 │ 03 Oct 25 19:37 UTC │ 03 Oct 25 19:38 UTC │
	│ delete  │ -p old-k8s-version-174543                                                                                                                                                                                                                     │ old-k8s-version-174543       │ jenkins │ v1.37.0 │ 03 Oct 25 19:37 UTC │ 03 Oct 25 19:37 UTC │
	│ delete  │ -p old-k8s-version-174543                                                                                                                                                                                                                     │ old-k8s-version-174543       │ jenkins │ v1.37.0 │ 03 Oct 25 19:37 UTC │ 03 Oct 25 19:37 UTC │
	│ start   │ -p embed-certs-327416 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-327416           │ jenkins │ v1.37.0 │ 03 Oct 25 19:37 UTC │ 03 Oct 25 19:39 UTC │
	│ addons  │ enable dashboard -p no-preload-643397 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-643397            │ jenkins │ v1.37.0 │ 03 Oct 25 19:38 UTC │ 03 Oct 25 19:38 UTC │
	│ start   │ -p no-preload-643397 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-643397            │ jenkins │ v1.37.0 │ 03 Oct 25 19:38 UTC │ 03 Oct 25 19:39 UTC │
	│ image   │ no-preload-643397 image list --format=json                                                                                                                                                                                                    │ no-preload-643397            │ jenkins │ v1.37.0 │ 03 Oct 25 19:39 UTC │ 03 Oct 25 19:39 UTC │
	│ pause   │ -p no-preload-643397 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-643397            │ jenkins │ v1.37.0 │ 03 Oct 25 19:39 UTC │                     │
	│ delete  │ -p no-preload-643397                                                                                                                                                                                                                          │ no-preload-643397            │ jenkins │ v1.37.0 │ 03 Oct 25 19:39 UTC │ 03 Oct 25 19:39 UTC │
	│ delete  │ -p no-preload-643397                                                                                                                                                                                                                          │ no-preload-643397            │ jenkins │ v1.37.0 │ 03 Oct 25 19:39 UTC │ 03 Oct 25 19:39 UTC │
	│ delete  │ -p disable-driver-mounts-839513                                                                                                                                                                                                               │ disable-driver-mounts-839513 │ jenkins │ v1.37.0 │ 03 Oct 25 19:39 UTC │ 03 Oct 25 19:39 UTC │
	│ start   │ -p default-k8s-diff-port-842797 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-842797 │ jenkins │ v1.37.0 │ 03 Oct 25 19:39 UTC │ 03 Oct 25 19:40 UTC │
	│ addons  │ enable metrics-server -p embed-certs-327416 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-327416           │ jenkins │ v1.37.0 │ 03 Oct 25 19:39 UTC │                     │
	│ stop    │ -p embed-certs-327416 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-327416           │ jenkins │ v1.37.0 │ 03 Oct 25 19:39 UTC │ 03 Oct 25 19:39 UTC │
	│ addons  │ enable dashboard -p embed-certs-327416 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-327416           │ jenkins │ v1.37.0 │ 03 Oct 25 19:39 UTC │ 03 Oct 25 19:39 UTC │
	│ start   │ -p embed-certs-327416 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-327416           │ jenkins │ v1.37.0 │ 03 Oct 25 19:39 UTC │ 03 Oct 25 19:40 UTC │
	│ image   │ embed-certs-327416 image list --format=json                                                                                                                                                                                                   │ embed-certs-327416           │ jenkins │ v1.37.0 │ 03 Oct 25 19:40 UTC │ 03 Oct 25 19:40 UTC │
	│ pause   │ -p embed-certs-327416 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-327416           │ jenkins │ v1.37.0 │ 03 Oct 25 19:40 UTC │                     │
	│ delete  │ -p embed-certs-327416                                                                                                                                                                                                                         │ embed-certs-327416           │ jenkins │ v1.37.0 │ 03 Oct 25 19:40 UTC │ 03 Oct 25 19:41 UTC │
	│ delete  │ -p embed-certs-327416                                                                                                                                                                                                                         │ embed-certs-327416           │ jenkins │ v1.37.0 │ 03 Oct 25 19:41 UTC │ 03 Oct 25 19:41 UTC │
	│ start   │ -p newest-cni-277907 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-277907            │ jenkins │ v1.37.0 │ 03 Oct 25 19:41 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-842797 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-842797 │ jenkins │ v1.37.0 │ 03 Oct 25 19:41 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/03 19:41:02
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 19:41:02.044182  490346 out.go:360] Setting OutFile to fd 1 ...
	I1003 19:41:02.044370  490346 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 19:41:02.044399  490346 out.go:374] Setting ErrFile to fd 2...
	I1003 19:41:02.044421  490346 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 19:41:02.044694  490346 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-284583/.minikube/bin
	I1003 19:41:02.045189  490346 out.go:368] Setting JSON to false
	I1003 19:41:02.046188  490346 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8613,"bootTime":1759511849,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1003 19:41:02.046290  490346 start.go:140] virtualization:  
	I1003 19:41:02.050304  490346 out.go:179] * [newest-cni-277907] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1003 19:41:02.054726  490346 out.go:179]   - MINIKUBE_LOCATION=21625
	I1003 19:41:02.054812  490346 notify.go:220] Checking for updates...
	I1003 19:41:02.061310  490346 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 19:41:02.064529  490346 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21625-284583/kubeconfig
	I1003 19:41:02.067614  490346 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-284583/.minikube
	I1003 19:41:02.070695  490346 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1003 19:41:02.073954  490346 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 19:41:02.077481  490346 config.go:182] Loaded profile config "default-k8s-diff-port-842797": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 19:41:02.077613  490346 driver.go:421] Setting default libvirt URI to qemu:///system
	I1003 19:41:02.106973  490346 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1003 19:41:02.107092  490346 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 19:41:02.167097  490346 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-03 19:41:02.157573865 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1003 19:41:02.167205  490346 docker.go:318] overlay module found
	I1003 19:41:02.170597  490346 out.go:179] * Using the docker driver based on user configuration
	I1003 19:41:02.173609  490346 start.go:304] selected driver: docker
	I1003 19:41:02.173634  490346 start.go:924] validating driver "docker" against <nil>
	I1003 19:41:02.173649  490346 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 19:41:02.174436  490346 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 19:41:02.227207  490346 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-03 19:41:02.217881894 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1003 19:41:02.227386  490346 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1003 19:41:02.227422  490346 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1003 19:41:02.227689  490346 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1003 19:41:02.230732  490346 out.go:179] * Using Docker driver with root privileges
	I1003 19:41:02.233664  490346 cni.go:84] Creating CNI manager for ""
	I1003 19:41:02.233748  490346 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 19:41:02.233763  490346 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1003 19:41:02.233842  490346 start.go:348] cluster config:
	{Name:newest-cni-277907 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-277907 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 19:41:02.238921  490346 out.go:179] * Starting "newest-cni-277907" primary control-plane node in "newest-cni-277907" cluster
	I1003 19:41:02.241849  490346 cache.go:123] Beginning downloading kic base image for docker with crio
	I1003 19:41:02.244779  490346 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1003 19:41:02.247794  490346 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 19:41:02.247862  490346 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21625-284583/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1003 19:41:02.247878  490346 cache.go:58] Caching tarball of preloaded images
	I1003 19:41:02.247986  490346 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1003 19:41:02.248267  490346 preload.go:233] Found /home/jenkins/minikube-integration/21625-284583/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1003 19:41:02.248314  490346 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1003 19:41:02.248490  490346 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/newest-cni-277907/config.json ...
	I1003 19:41:02.248534  490346 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/newest-cni-277907/config.json: {Name:mk51b2174002342f76c2387cdadf832668cbf990 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:41:02.268295  490346 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1003 19:41:02.268322  490346 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1003 19:41:02.268342  490346 cache.go:232] Successfully downloaded all kic artifacts
	I1003 19:41:02.268365  490346 start.go:360] acquireMachinesLock for newest-cni-277907: {Name:mkd134b602e6b475d420a69856bbf9b26bf807b4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 19:41:02.268487  490346 start.go:364] duration metric: took 99.505µs to acquireMachinesLock for "newest-cni-277907"
	I1003 19:41:02.268517  490346 start.go:93] Provisioning new machine with config: &{Name:newest-cni-277907 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-277907 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1003 19:41:02.268604  490346 start.go:125] createHost starting for "" (driver="docker")
	
	
	==> CRI-O <==
	Oct 03 19:40:52 default-k8s-diff-port-842797 crio[838]: time="2025-10-03T19:40:52.26012455Z" level=info msg="Created container 7ea0b6bea7258e23f5048aa34ecce79bb2ca187c297feffb840b514309773dae: kube-system/coredns-66bc5c9577-l8knz/coredns" id=ca48d2bd-d46e-4112-b04b-406f56c98ef4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:40:52 default-k8s-diff-port-842797 crio[838]: time="2025-10-03T19:40:52.261006418Z" level=info msg="Starting container: 7ea0b6bea7258e23f5048aa34ecce79bb2ca187c297feffb840b514309773dae" id=9a144379-1915-4bc7-8cd2-7de69bf90639 name=/runtime.v1.RuntimeService/StartContainer
	Oct 03 19:40:52 default-k8s-diff-port-842797 crio[838]: time="2025-10-03T19:40:52.264069078Z" level=info msg="Started container" PID=1757 containerID=7ea0b6bea7258e23f5048aa34ecce79bb2ca187c297feffb840b514309773dae description=kube-system/coredns-66bc5c9577-l8knz/coredns id=9a144379-1915-4bc7-8cd2-7de69bf90639 name=/runtime.v1.RuntimeService/StartContainer sandboxID=26251ed383176058e11a4207eaf6c8a5d9b49162f7ab8b4de24bad03a462fe3b
	Oct 03 19:40:54 default-k8s-diff-port-842797 crio[838]: time="2025-10-03T19:40:54.979521344Z" level=info msg="Running pod sandbox: default/busybox/POD" id=86300a8b-0eec-4def-87bc-17748974b892 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 03 19:40:54 default-k8s-diff-port-842797 crio[838]: time="2025-10-03T19:40:54.979597669Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 19:40:54 default-k8s-diff-port-842797 crio[838]: time="2025-10-03T19:40:54.990752221Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:7812e0f952d337f0e7f64eebe12cca85cba972b5bf2a4435ab8c0801c07474c9 UID:8e5137cd-0a54-45cf-a04a-251fab3a1832 NetNS:/var/run/netns/ea5e64f4-fa1f-42f4-b387-f502263dd86f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000171070}] Aliases:map[]}"
	Oct 03 19:40:54 default-k8s-diff-port-842797 crio[838]: time="2025-10-03T19:40:54.990920749Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 03 19:40:55 default-k8s-diff-port-842797 crio[838]: time="2025-10-03T19:40:55.009116346Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:7812e0f952d337f0e7f64eebe12cca85cba972b5bf2a4435ab8c0801c07474c9 UID:8e5137cd-0a54-45cf-a04a-251fab3a1832 NetNS:/var/run/netns/ea5e64f4-fa1f-42f4-b387-f502263dd86f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000171070}] Aliases:map[]}"
	Oct 03 19:40:55 default-k8s-diff-port-842797 crio[838]: time="2025-10-03T19:40:55.009744762Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 03 19:40:55 default-k8s-diff-port-842797 crio[838]: time="2025-10-03T19:40:55.026997288Z" level=info msg="Ran pod sandbox 7812e0f952d337f0e7f64eebe12cca85cba972b5bf2a4435ab8c0801c07474c9 with infra container: default/busybox/POD" id=86300a8b-0eec-4def-87bc-17748974b892 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 03 19:40:55 default-k8s-diff-port-842797 crio[838]: time="2025-10-03T19:40:55.031792102Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ad8cef63-742d-4cc0-9bc9-8e72247c08d4 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 19:40:55 default-k8s-diff-port-842797 crio[838]: time="2025-10-03T19:40:55.032335643Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=ad8cef63-742d-4cc0-9bc9-8e72247c08d4 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 19:40:55 default-k8s-diff-port-842797 crio[838]: time="2025-10-03T19:40:55.032492569Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=ad8cef63-742d-4cc0-9bc9-8e72247c08d4 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 19:40:55 default-k8s-diff-port-842797 crio[838]: time="2025-10-03T19:40:55.038418732Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=7dd6d013-0b0f-4997-bde8-0092ddf77d5a name=/runtime.v1.ImageService/PullImage
	Oct 03 19:40:55 default-k8s-diff-port-842797 crio[838]: time="2025-10-03T19:40:55.045950237Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 03 19:40:57 default-k8s-diff-port-842797 crio[838]: time="2025-10-03T19:40:57.255787682Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=7dd6d013-0b0f-4997-bde8-0092ddf77d5a name=/runtime.v1.ImageService/PullImage
	Oct 03 19:40:57 default-k8s-diff-port-842797 crio[838]: time="2025-10-03T19:40:57.25668093Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=3a89f5b3-7599-4b60-87c7-a04ce5a85481 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 19:40:57 default-k8s-diff-port-842797 crio[838]: time="2025-10-03T19:40:57.26125833Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=931a3d85-567e-4854-9564-9b08bd0967b7 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 19:40:57 default-k8s-diff-port-842797 crio[838]: time="2025-10-03T19:40:57.273036162Z" level=info msg="Creating container: default/busybox/busybox" id=80aedae6-864e-4fbf-ad0d-86ec0487204b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:40:57 default-k8s-diff-port-842797 crio[838]: time="2025-10-03T19:40:57.274081823Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 19:40:57 default-k8s-diff-port-842797 crio[838]: time="2025-10-03T19:40:57.283663289Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 19:40:57 default-k8s-diff-port-842797 crio[838]: time="2025-10-03T19:40:57.284477562Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 19:40:57 default-k8s-diff-port-842797 crio[838]: time="2025-10-03T19:40:57.299913452Z" level=info msg="Created container d71222581b28f6e0a057c2ecbd47b279bdca8fcce19a1c19580db5dacd5c3c07: default/busybox/busybox" id=80aedae6-864e-4fbf-ad0d-86ec0487204b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:40:57 default-k8s-diff-port-842797 crio[838]: time="2025-10-03T19:40:57.301125278Z" level=info msg="Starting container: d71222581b28f6e0a057c2ecbd47b279bdca8fcce19a1c19580db5dacd5c3c07" id=53810498-e323-422a-8b12-40d6b03ca759 name=/runtime.v1.RuntimeService/StartContainer
	Oct 03 19:40:57 default-k8s-diff-port-842797 crio[838]: time="2025-10-03T19:40:57.306491023Z" level=info msg="Started container" PID=1820 containerID=d71222581b28f6e0a057c2ecbd47b279bdca8fcce19a1c19580db5dacd5c3c07 description=default/busybox/busybox id=53810498-e323-422a-8b12-40d6b03ca759 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7812e0f952d337f0e7f64eebe12cca85cba972b5bf2a4435ab8c0801c07474c9
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	d71222581b28f       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago        Running             busybox                   0                   7812e0f952d33       busybox                                                default
	7ea0b6bea7258       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      12 seconds ago       Running             coredns                   0                   26251ed383176       coredns-66bc5c9577-l8knz                               kube-system
	478e81dfbcf5d       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      12 seconds ago       Running             storage-provisioner       0                   aa244fe036b2c       storage-provisioner                                    kube-system
	89a7a8cdd5e85       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      53 seconds ago       Running             kindnet-cni               0                   37dda02cd5214       kindnet-96q8s                                          kube-system
	49b01b570e6d2       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      53 seconds ago       Running             kube-proxy                0                   aa12d0d5dde1c       kube-proxy-gvslj                                       kube-system
	cdba00ff6901c       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   056a8657bf046       kube-scheduler-default-k8s-diff-port-842797            kube-system
	73ee9a1d9c1a5       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   591e146179a68       kube-controller-manager-default-k8s-diff-port-842797   kube-system
	9646a799550c4       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   c949012a1e761       etcd-default-k8s-diff-port-842797                      kube-system
	ddbc44787c532       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   16a99eedc05f1       kube-apiserver-default-k8s-diff-port-842797            kube-system
	
	
	==> coredns [7ea0b6bea7258e23f5048aa34ecce79bb2ca187c297feffb840b514309773dae] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34945 - 51842 "HINFO IN 2679673086001837735.393987194381740739. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.032841731s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-842797
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-842797
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a43873c79fc22f8b1ccd29d3dfa635d392b09335
	                    minikube.k8s.io/name=default-k8s-diff-port-842797
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_03T19_40_05_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 03 Oct 2025 19:40:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-842797
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 03 Oct 2025 19:40:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 03 Oct 2025 19:40:56 +0000   Fri, 03 Oct 2025 19:39:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 03 Oct 2025 19:40:56 +0000   Fri, 03 Oct 2025 19:39:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 03 Oct 2025 19:40:56 +0000   Fri, 03 Oct 2025 19:39:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 03 Oct 2025 19:40:56 +0000   Fri, 03 Oct 2025 19:40:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-842797
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 178aec6e1f42489bab9e1b42d3e29a8d
	  System UUID:                0315913a-ac76-434b-8962-2420e3ad1d8e
	  Boot ID:                    3762136e-8bec-4104-a5cb-0b1976f6048e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-l8knz                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     54s
	  kube-system                 etcd-default-k8s-diff-port-842797                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         59s
	  kube-system                 kindnet-96q8s                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      54s
	  kube-system                 kube-apiserver-default-k8s-diff-port-842797             250m (12%)    0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-842797    200m (10%)    0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-proxy-gvslj                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kube-system                 kube-scheduler-default-k8s-diff-port-842797             100m (5%)     0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 53s                kube-proxy       
	  Warning  CgroupV1                 71s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  71s (x8 over 71s)  kubelet          Node default-k8s-diff-port-842797 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    71s (x8 over 71s)  kubelet          Node default-k8s-diff-port-842797 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     71s (x8 over 71s)  kubelet          Node default-k8s-diff-port-842797 status is now: NodeHasSufficientPID
	  Normal   Starting                 60s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 60s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  59s                kubelet          Node default-k8s-diff-port-842797 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    59s                kubelet          Node default-k8s-diff-port-842797 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     59s                kubelet          Node default-k8s-diff-port-842797 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           55s                node-controller  Node default-k8s-diff-port-842797 event: Registered Node default-k8s-diff-port-842797 in Controller
	  Normal   NodeReady                13s                kubelet          Node default-k8s-diff-port-842797 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct 3 19:11] overlayfs: idmapped layers are currently not supported
	[  +4.287643] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:12] overlayfs: idmapped layers are currently not supported
	[ +24.839009] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:13] overlayfs: idmapped layers are currently not supported
	[ +26.493253] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:15] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:16] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:17] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000010] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Oct 3 19:18] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:20] overlayfs: idmapped layers are currently not supported
	[ +32.018892] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:22] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:24] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:26] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:32] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:34] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:35] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:36] overlayfs: idmapped layers are currently not supported
	[  +4.740983] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:38] overlayfs: idmapped layers are currently not supported
	[ +12.897300] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:39] overlayfs: idmapped layers are currently not supported
	[  +4.104516] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [9646a799550c480b4ca59d2922e5c74c5de08d4128ffa1f881871bb12f2f738b] <==
	{"level":"warn","ts":"2025-10-03T19:39:57.573929Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:39:57.674409Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:39:57.706139Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:39:57.711066Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:39:57.746680Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:39:57.796954Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:39:57.845718Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:39:57.893480Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:39:57.929750Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:39:57.984549Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:39:58.005144Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:39:58.093492Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:39:58.164405Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:39:58.258769Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:39:58.327041Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:39:58.374167Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:39:58.387608Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:39:58.437877Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:39:58.485132Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:39:58.538303Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:39:58.582021Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:39:58.633694Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:39:58.668806Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:39:58.720400Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:39:59.040955Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56254","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 19:41:04 up  2:23,  0 user,  load average: 3.59, 3.23, 2.42
	Linux default-k8s-diff-port-842797 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [89a7a8cdd5e8580497a165ad26f44894489b2f3ea6da9624523421c9baa31dcd] <==
	I1003 19:40:11.399034       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1003 19:40:11.489768       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1003 19:40:11.489925       1 main.go:148] setting mtu 1500 for CNI 
	I1003 19:40:11.489938       1 main.go:178] kindnetd IP family: "ipv4"
	I1003 19:40:11.489952       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-03T19:40:11Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1003 19:40:11.689546       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1003 19:40:11.689580       1 controller.go:381] "Waiting for informer caches to sync"
	I1003 19:40:11.689589       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1003 19:40:11.690337       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1003 19:40:41.690489       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1003 19:40:41.690605       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1003 19:40:41.690695       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1003 19:40:41.690781       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1003 19:40:43.289765       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1003 19:40:43.289803       1 metrics.go:72] Registering metrics
	I1003 19:40:43.289878       1 controller.go:711] "Syncing nftables rules"
	I1003 19:40:51.690165       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1003 19:40:51.690218       1 main.go:301] handling current node
	I1003 19:41:01.690033       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1003 19:41:01.690136       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ddbc44787c532129261e9ecc5a0a65063af01c8732a2dce1ffa448709a7436f0] <==
	I1003 19:40:01.621061       1 aggregator.go:171] initial CRD sync complete...
	I1003 19:40:01.621095       1 autoregister_controller.go:144] Starting autoregister controller
	I1003 19:40:01.621122       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1003 19:40:01.621149       1 cache.go:39] Caches are synced for autoregister controller
	I1003 19:40:01.651255       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1003 19:40:01.659998       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1003 19:40:01.670685       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1003 19:40:02.067306       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1003 19:40:02.145062       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1003 19:40:02.145153       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1003 19:40:03.541446       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1003 19:40:03.615431       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1003 19:40:03.714870       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1003 19:40:03.731199       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1003 19:40:03.732507       1 controller.go:667] quota admission added evaluator for: endpoints
	I1003 19:40:03.739715       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1003 19:40:04.629540       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1003 19:40:04.691410       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1003 19:40:04.755029       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1003 19:40:04.788200       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1003 19:40:10.326888       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1003 19:40:10.335489       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1003 19:40:10.566282       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1003 19:40:10.723026       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1003 19:41:02.839518       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8444->192.168.76.1:36122: use of closed network connection
	
	
	==> kube-controller-manager [73ee9a1d9c1a510662a062d1436cc0e9b4e53091140c229bf4fb3efba2b5cfd9] <==
	I1003 19:40:09.706389       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1003 19:40:09.711052       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1003 19:40:09.711100       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1003 19:40:09.711113       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1003 19:40:09.711246       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1003 19:40:09.713832       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1003 19:40:09.713944       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1003 19:40:09.714368       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1003 19:40:09.714385       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1003 19:40:09.714392       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1003 19:40:09.714968       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1003 19:40:09.719181       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1003 19:40:09.721078       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1003 19:40:09.724087       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1003 19:40:09.730581       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1003 19:40:09.730860       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1003 19:40:09.731216       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1003 19:40:09.736868       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1003 19:40:09.741788       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1003 19:40:09.744033       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1003 19:40:09.750409       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1003 19:40:09.751579       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1003 19:40:09.752878       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1003 19:40:09.761074       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1003 19:40:54.670649       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [49b01b570e6d27acd14320698e1fe5feea715416035b7faec5e4520391cb2a96] <==
	I1003 19:40:11.445049       1 server_linux.go:53] "Using iptables proxy"
	I1003 19:40:11.520396       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1003 19:40:11.628140       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1003 19:40:11.628179       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1003 19:40:11.628306       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1003 19:40:11.672082       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1003 19:40:11.672147       1 server_linux.go:132] "Using iptables Proxier"
	I1003 19:40:11.676631       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1003 19:40:11.677093       1 server.go:527] "Version info" version="v1.34.1"
	I1003 19:40:11.677304       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1003 19:40:11.678746       1 config.go:200] "Starting service config controller"
	I1003 19:40:11.678818       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1003 19:40:11.678863       1 config.go:106] "Starting endpoint slice config controller"
	I1003 19:40:11.678891       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1003 19:40:11.678929       1 config.go:403] "Starting serviceCIDR config controller"
	I1003 19:40:11.678955       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1003 19:40:11.680974       1 config.go:309] "Starting node config controller"
	I1003 19:40:11.681049       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1003 19:40:11.681082       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1003 19:40:11.779362       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1003 19:40:11.779401       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1003 19:40:11.779428       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [cdba00ff6901cc462d5f31c0f041c21e5a4824a5fa4129a6248dec2291db852b] <==
	I1003 19:40:02.662233       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1003 19:40:02.682066       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1003 19:40:02.682191       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1003 19:40:02.682215       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1003 19:40:02.682232       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1003 19:40:02.720668       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1003 19:40:02.729416       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1003 19:40:02.729475       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1003 19:40:02.729515       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1003 19:40:02.729552       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1003 19:40:02.729599       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1003 19:40:02.729633       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1003 19:40:02.729666       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1003 19:40:02.729730       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1003 19:40:02.729769       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1003 19:40:02.729802       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1003 19:40:02.729841       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1003 19:40:02.729883       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1003 19:40:02.729918       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1003 19:40:02.729956       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1003 19:40:02.730029       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1003 19:40:02.730067       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1003 19:40:02.730113       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1003 19:40:02.730175       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I1003 19:40:03.782606       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 03 19:40:06 default-k8s-diff-port-842797 kubelet[1329]: I1003 19:40:06.457029    1329 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-842797" podStartSLOduration=1.457023051 podStartE2EDuration="1.457023051s" podCreationTimestamp="2025-10-03 19:40:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-03 19:40:06.456566076 +0000 UTC m=+1.855585427" watchObservedRunningTime="2025-10-03 19:40:06.457023051 +0000 UTC m=+1.856042394"
	Oct 03 19:40:09 default-k8s-diff-port-842797 kubelet[1329]: I1003 19:40:09.718263    1329 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 03 19:40:09 default-k8s-diff-port-842797 kubelet[1329]: I1003 19:40:09.720943    1329 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 03 19:40:10 default-k8s-diff-port-842797 kubelet[1329]: I1003 19:40:10.883158    1329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3cfa5fdd-13b6-4c43-aa02-a74c256ceed2-kube-proxy\") pod \"kube-proxy-gvslj\" (UID: \"3cfa5fdd-13b6-4c43-aa02-a74c256ceed2\") " pod="kube-system/kube-proxy-gvslj"
	Oct 03 19:40:10 default-k8s-diff-port-842797 kubelet[1329]: I1003 19:40:10.883757    1329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7vfb\" (UniqueName: \"kubernetes.io/projected/3cfa5fdd-13b6-4c43-aa02-a74c256ceed2-kube-api-access-p7vfb\") pod \"kube-proxy-gvslj\" (UID: \"3cfa5fdd-13b6-4c43-aa02-a74c256ceed2\") " pod="kube-system/kube-proxy-gvslj"
	Oct 03 19:40:10 default-k8s-diff-port-842797 kubelet[1329]: I1003 19:40:10.883909    1329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/ab4664bf-01c0-4b62-9eb8-f65194dff517-cni-cfg\") pod \"kindnet-96q8s\" (UID: \"ab4664bf-01c0-4b62-9eb8-f65194dff517\") " pod="kube-system/kindnet-96q8s"
	Oct 03 19:40:10 default-k8s-diff-port-842797 kubelet[1329]: I1003 19:40:10.884038    1329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8h6x\" (UniqueName: \"kubernetes.io/projected/ab4664bf-01c0-4b62-9eb8-f65194dff517-kube-api-access-g8h6x\") pod \"kindnet-96q8s\" (UID: \"ab4664bf-01c0-4b62-9eb8-f65194dff517\") " pod="kube-system/kindnet-96q8s"
	Oct 03 19:40:10 default-k8s-diff-port-842797 kubelet[1329]: I1003 19:40:10.884210    1329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3cfa5fdd-13b6-4c43-aa02-a74c256ceed2-xtables-lock\") pod \"kube-proxy-gvslj\" (UID: \"3cfa5fdd-13b6-4c43-aa02-a74c256ceed2\") " pod="kube-system/kube-proxy-gvslj"
	Oct 03 19:40:10 default-k8s-diff-port-842797 kubelet[1329]: I1003 19:40:10.884342    1329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ab4664bf-01c0-4b62-9eb8-f65194dff517-xtables-lock\") pod \"kindnet-96q8s\" (UID: \"ab4664bf-01c0-4b62-9eb8-f65194dff517\") " pod="kube-system/kindnet-96q8s"
	Oct 03 19:40:10 default-k8s-diff-port-842797 kubelet[1329]: I1003 19:40:10.884466    1329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3cfa5fdd-13b6-4c43-aa02-a74c256ceed2-lib-modules\") pod \"kube-proxy-gvslj\" (UID: \"3cfa5fdd-13b6-4c43-aa02-a74c256ceed2\") " pod="kube-system/kube-proxy-gvslj"
	Oct 03 19:40:10 default-k8s-diff-port-842797 kubelet[1329]: I1003 19:40:10.884617    1329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ab4664bf-01c0-4b62-9eb8-f65194dff517-lib-modules\") pod \"kindnet-96q8s\" (UID: \"ab4664bf-01c0-4b62-9eb8-f65194dff517\") " pod="kube-system/kindnet-96q8s"
	Oct 03 19:40:11 default-k8s-diff-port-842797 kubelet[1329]: I1003 19:40:11.029971    1329 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 03 19:40:11 default-k8s-diff-port-842797 kubelet[1329]: W1003 19:40:11.217145    1329 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/dd1cbce823c3c68d280f6d6431457674ab5e928f19effd4b41908fc33cc07deb/crio-37dda02cd5214a906c83c698d4e078309c73dd1cede3ff784944f215f30255df WatchSource:0}: Error finding container 37dda02cd5214a906c83c698d4e078309c73dd1cede3ff784944f215f30255df: Status 404 returned error can't find the container with id 37dda02cd5214a906c83c698d4e078309c73dd1cede3ff784944f215f30255df
	Oct 03 19:40:12 default-k8s-diff-port-842797 kubelet[1329]: I1003 19:40:12.343596    1329 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-96q8s" podStartSLOduration=2.34357357 podStartE2EDuration="2.34357357s" podCreationTimestamp="2025-10-03 19:40:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-03 19:40:12.319895757 +0000 UTC m=+7.718915108" watchObservedRunningTime="2025-10-03 19:40:12.34357357 +0000 UTC m=+7.742592929"
	Oct 03 19:40:14 default-k8s-diff-port-842797 kubelet[1329]: I1003 19:40:14.253663    1329 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gvslj" podStartSLOduration=4.2536436460000004 podStartE2EDuration="4.253643646s" podCreationTimestamp="2025-10-03 19:40:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-03 19:40:12.345289723 +0000 UTC m=+7.744309066" watchObservedRunningTime="2025-10-03 19:40:14.253643646 +0000 UTC m=+9.652662997"
	Oct 03 19:40:51 default-k8s-diff-port-842797 kubelet[1329]: I1003 19:40:51.837375    1329 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 03 19:40:52 default-k8s-diff-port-842797 kubelet[1329]: I1003 19:40:52.004979    1329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20442eef-faaa-4dfb-bd27-e8f4fda45d0e-config-volume\") pod \"coredns-66bc5c9577-l8knz\" (UID: \"20442eef-faaa-4dfb-bd27-e8f4fda45d0e\") " pod="kube-system/coredns-66bc5c9577-l8knz"
	Oct 03 19:40:52 default-k8s-diff-port-842797 kubelet[1329]: I1003 19:40:52.005353    1329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56hvr\" (UniqueName: \"kubernetes.io/projected/20442eef-faaa-4dfb-bd27-e8f4fda45d0e-kube-api-access-56hvr\") pod \"coredns-66bc5c9577-l8knz\" (UID: \"20442eef-faaa-4dfb-bd27-e8f4fda45d0e\") " pod="kube-system/coredns-66bc5c9577-l8knz"
	Oct 03 19:40:52 default-k8s-diff-port-842797 kubelet[1329]: I1003 19:40:52.005468    1329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46hb6\" (UniqueName: \"kubernetes.io/projected/e700db76-d3d4-422f-8069-cb3a0b9ebe86-kube-api-access-46hb6\") pod \"storage-provisioner\" (UID: \"e700db76-d3d4-422f-8069-cb3a0b9ebe86\") " pod="kube-system/storage-provisioner"
	Oct 03 19:40:52 default-k8s-diff-port-842797 kubelet[1329]: I1003 19:40:52.005592    1329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/e700db76-d3d4-422f-8069-cb3a0b9ebe86-tmp\") pod \"storage-provisioner\" (UID: \"e700db76-d3d4-422f-8069-cb3a0b9ebe86\") " pod="kube-system/storage-provisioner"
	Oct 03 19:40:52 default-k8s-diff-port-842797 kubelet[1329]: W1003 19:40:52.199903    1329 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/dd1cbce823c3c68d280f6d6431457674ab5e928f19effd4b41908fc33cc07deb/crio-aa244fe036b2c8aa3840f1ffccae0235ebf89ab2f11bb95612b508192e62e6df WatchSource:0}: Error finding container aa244fe036b2c8aa3840f1ffccae0235ebf89ab2f11bb95612b508192e62e6df: Status 404 returned error can't find the container with id aa244fe036b2c8aa3840f1ffccae0235ebf89ab2f11bb95612b508192e62e6df
	Oct 03 19:40:52 default-k8s-diff-port-842797 kubelet[1329]: W1003 19:40:52.211864    1329 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/dd1cbce823c3c68d280f6d6431457674ab5e928f19effd4b41908fc33cc07deb/crio-26251ed383176058e11a4207eaf6c8a5d9b49162f7ab8b4de24bad03a462fe3b WatchSource:0}: Error finding container 26251ed383176058e11a4207eaf6c8a5d9b49162f7ab8b4de24bad03a462fe3b: Status 404 returned error can't find the container with id 26251ed383176058e11a4207eaf6c8a5d9b49162f7ab8b4de24bad03a462fe3b
	Oct 03 19:40:52 default-k8s-diff-port-842797 kubelet[1329]: I1003 19:40:52.427192    1329 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=40.427172149 podStartE2EDuration="40.427172149s" podCreationTimestamp="2025-10-03 19:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-03 19:40:52.410832745 +0000 UTC m=+47.809852104" watchObservedRunningTime="2025-10-03 19:40:52.427172149 +0000 UTC m=+47.826191492"
	Oct 03 19:40:54 default-k8s-diff-port-842797 kubelet[1329]: I1003 19:40:54.668826    1329 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-l8knz" podStartSLOduration=44.668800307 podStartE2EDuration="44.668800307s" podCreationTimestamp="2025-10-03 19:40:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-03 19:40:52.428197584 +0000 UTC m=+47.827216943" watchObservedRunningTime="2025-10-03 19:40:54.668800307 +0000 UTC m=+50.067819666"
	Oct 03 19:40:54 default-k8s-diff-port-842797 kubelet[1329]: I1003 19:40:54.829590    1329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krsqv\" (UniqueName: \"kubernetes.io/projected/8e5137cd-0a54-45cf-a04a-251fab3a1832-kube-api-access-krsqv\") pod \"busybox\" (UID: \"8e5137cd-0a54-45cf-a04a-251fab3a1832\") " pod="default/busybox"
	
	
	==> storage-provisioner [478e81dfbcf5d7f60e5878fb15c091fe756c1d635acbcea6d8dca7202899f43c] <==
	I1003 19:40:52.256456       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1003 19:40:52.287598       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1003 19:40:52.287862       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1003 19:40:52.290359       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:40:52.297241       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1003 19:40:52.298520       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cf2e8791-f5d6-4403-8f58-225b6bccc9d1", APIVersion:"v1", ResourceVersion:"459", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-842797_c7e432a4-3eeb-465c-ad14-753ee38cb624 became leader
	I1003 19:40:52.299843       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1003 19:40:52.299991       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-842797_c7e432a4-3eeb-465c-ad14-753ee38cb624!
	W1003 19:40:52.334389       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:40:52.337825       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1003 19:40:52.406025       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-842797_c7e432a4-3eeb-465c-ad14-753ee38cb624!
	W1003 19:40:54.341260       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:40:54.356901       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:40:56.361558       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:40:56.368140       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:40:58.371593       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:40:58.376588       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:41:00.380792       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:41:00.387271       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:41:02.391424       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:41:02.399944       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:41:04.403865       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:41:04.415100       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-842797 -n default-k8s-diff-port-842797
E1003 19:41:05.576063  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/old-k8s-version-174543/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-842797 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (3.03s)
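The storage-provisioner log quoted above repeatedly warns that leader election over v1 Endpoints is deprecated in favour of discovery/coordination resources. For reference, a minimal sketch of the Lease-based lock that client-go provides as the replacement; the lock name "k8s.io-minikube-hostpath" and the "kube-system" namespace are taken from the log, while the in-cluster config and the identity string are assumptions for illustration, not minikube's actual code.

	// lease_lock_sketch.go: Lease-based leader election with client-go
	// (a sketch only; the provisioner in the log above still uses the
	// deprecated v1 Endpoints lock).
	package main

	import (
		"context"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := rest.InClusterConfig() // assumes the provisioner runs in-cluster
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// A coordination.k8s.io/v1 Lease replaces the deprecated v1 Endpoints lock.
		lock := &resourcelock.LeaseLock{
			LeaseMeta: metav1.ObjectMeta{
				Namespace: "kube-system",
				Name:      "k8s.io-minikube-hostpath", // lock name seen in the log
			},
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: "example-provisioner-id"},
		}

		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) { /* start the provisioner controller */ },
				OnStoppedLeading: func() { /* stop work when leadership is lost */ },
			},
		})
	}

RunOrDie blocks, acquiring and renewing the Lease until leadership is lost; no Endpoints objects (and hence none of the deprecation warnings above) are involved.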

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (3.17s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-277907 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-277907 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (350.900255ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-03T19:41:48Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-277907 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
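The MK_ADDON_ENABLE_PAUSED error above reports that the paused-state check ran `sudo runc list -f json` on the node and got a non-zero exit because /run/runc was missing. A hedged way to reproduce that probe from the host is sketched below; it runs the same command via `docker exec` into the node container (minikube itself goes over SSH, as the provisioning log later in this report shows), and the container name is taken from this test run.

	// runc_probe_sketch.go: re-run the failing paused-state probe by hand.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("docker", "exec", "newest-cni-277907",
			"sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			// With /run/runc absent, runc exits non-zero; this is what the
			// addon-enable path surfaces as MK_ADDON_ENABLE_PAUSED above.
			fmt.Printf("runc list failed: %v\n%s", err, out)
			return
		}
		fmt.Printf("runc containers: %s\n", out)
	}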
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-277907
helpers_test.go:243: (dbg) docker inspect newest-cni-277907:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8b59090431046f9d951b48ace59a9091019f835007d577cd4555f6908daa6561",
	        "Created": "2025-10-03T19:41:08.107758945Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 491387,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-03T19:41:08.174630826Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/8b59090431046f9d951b48ace59a9091019f835007d577cd4555f6908daa6561/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8b59090431046f9d951b48ace59a9091019f835007d577cd4555f6908daa6561/hostname",
	        "HostsPath": "/var/lib/docker/containers/8b59090431046f9d951b48ace59a9091019f835007d577cd4555f6908daa6561/hosts",
	        "LogPath": "/var/lib/docker/containers/8b59090431046f9d951b48ace59a9091019f835007d577cd4555f6908daa6561/8b59090431046f9d951b48ace59a9091019f835007d577cd4555f6908daa6561-json.log",
	        "Name": "/newest-cni-277907",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-277907:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-277907",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8b59090431046f9d951b48ace59a9091019f835007d577cd4555f6908daa6561",
	                "LowerDir": "/var/lib/docker/overlay2/b6fb5b9dd131113b1ef3ef7a8465607ff85135a48ebecb8c77db75dd388bdc0a-init/diff:/var/lib/docker/overlay2/87b205803817b0b71a214d995ab7e10a92033bbf72d76d6e052f1d21ccecb313/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b6fb5b9dd131113b1ef3ef7a8465607ff85135a48ebecb8c77db75dd388bdc0a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b6fb5b9dd131113b1ef3ef7a8465607ff85135a48ebecb8c77db75dd388bdc0a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b6fb5b9dd131113b1ef3ef7a8465607ff85135a48ebecb8c77db75dd388bdc0a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-277907",
	                "Source": "/var/lib/docker/volumes/newest-cni-277907/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-277907",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-277907",
	                "name.minikube.sigs.k8s.io": "newest-cni-277907",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "30bde1ad8f32ca41d6a0d32d52e22f40d3a561a45bed77fce21ca3b156feefad",
	            "SandboxKey": "/var/run/docker/netns/30bde1ad8f32",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33453"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33454"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33457"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33455"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33456"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-277907": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "12:26:f7:29:ec:37",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2fb89ec9f4b0d949e37566d491ff7c9e7ec5488e3271757158a55861f4d56349",
	                    "EndpointID": "51b5223701c95f9e2b22af19d78481e84eb009205c47c5f4973aaf1ea6d6de27",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-277907",
	                        "8b5909043104"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
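The NetworkSettings.Ports block in the inspect output above is what the harness queries to find the host port mapped to each container port; the provisioning log later in this section runs the same kind of `docker container inspect -f` template for 22/tcp. A small sketch of that lookup for the API server port (8443/tcp), assuming the docker CLI is on PATH; the container name and the expected value are taken from the inspect output above.

	// port_lookup_sketch.go: read the host port Docker mapped to 8443/tcp.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		format := `{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", format,
			"newest-cni-277907").Output()
		if err != nil {
			panic(err)
		}
		// For this run the inspect output above shows 33456.
		fmt.Println("apiserver host port:", strings.TrimSpace(string(out)))
	}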
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-277907 -n newest-cni-277907
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-277907 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-277907 logs -n 25: (1.469309347s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ delete  │ -p old-k8s-version-174543                                                                                                                                                                                                                     │ old-k8s-version-174543       │ jenkins │ v1.37.0 │ 03 Oct 25 19:37 UTC │ 03 Oct 25 19:37 UTC │
	│ delete  │ -p old-k8s-version-174543                                                                                                                                                                                                                     │ old-k8s-version-174543       │ jenkins │ v1.37.0 │ 03 Oct 25 19:37 UTC │ 03 Oct 25 19:37 UTC │
	│ start   │ -p embed-certs-327416 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-327416           │ jenkins │ v1.37.0 │ 03 Oct 25 19:37 UTC │ 03 Oct 25 19:39 UTC │
	│ addons  │ enable dashboard -p no-preload-643397 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-643397            │ jenkins │ v1.37.0 │ 03 Oct 25 19:38 UTC │ 03 Oct 25 19:38 UTC │
	│ start   │ -p no-preload-643397 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-643397            │ jenkins │ v1.37.0 │ 03 Oct 25 19:38 UTC │ 03 Oct 25 19:39 UTC │
	│ image   │ no-preload-643397 image list --format=json                                                                                                                                                                                                    │ no-preload-643397            │ jenkins │ v1.37.0 │ 03 Oct 25 19:39 UTC │ 03 Oct 25 19:39 UTC │
	│ pause   │ -p no-preload-643397 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-643397            │ jenkins │ v1.37.0 │ 03 Oct 25 19:39 UTC │                     │
	│ delete  │ -p no-preload-643397                                                                                                                                                                                                                          │ no-preload-643397            │ jenkins │ v1.37.0 │ 03 Oct 25 19:39 UTC │ 03 Oct 25 19:39 UTC │
	│ delete  │ -p no-preload-643397                                                                                                                                                                                                                          │ no-preload-643397            │ jenkins │ v1.37.0 │ 03 Oct 25 19:39 UTC │ 03 Oct 25 19:39 UTC │
	│ delete  │ -p disable-driver-mounts-839513                                                                                                                                                                                                               │ disable-driver-mounts-839513 │ jenkins │ v1.37.0 │ 03 Oct 25 19:39 UTC │ 03 Oct 25 19:39 UTC │
	│ start   │ -p default-k8s-diff-port-842797 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-842797 │ jenkins │ v1.37.0 │ 03 Oct 25 19:39 UTC │ 03 Oct 25 19:40 UTC │
	│ addons  │ enable metrics-server -p embed-certs-327416 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-327416           │ jenkins │ v1.37.0 │ 03 Oct 25 19:39 UTC │                     │
	│ stop    │ -p embed-certs-327416 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-327416           │ jenkins │ v1.37.0 │ 03 Oct 25 19:39 UTC │ 03 Oct 25 19:39 UTC │
	│ addons  │ enable dashboard -p embed-certs-327416 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-327416           │ jenkins │ v1.37.0 │ 03 Oct 25 19:39 UTC │ 03 Oct 25 19:39 UTC │
	│ start   │ -p embed-certs-327416 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-327416           │ jenkins │ v1.37.0 │ 03 Oct 25 19:39 UTC │ 03 Oct 25 19:40 UTC │
	│ image   │ embed-certs-327416 image list --format=json                                                                                                                                                                                                   │ embed-certs-327416           │ jenkins │ v1.37.0 │ 03 Oct 25 19:40 UTC │ 03 Oct 25 19:40 UTC │
	│ pause   │ -p embed-certs-327416 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-327416           │ jenkins │ v1.37.0 │ 03 Oct 25 19:40 UTC │                     │
	│ delete  │ -p embed-certs-327416                                                                                                                                                                                                                         │ embed-certs-327416           │ jenkins │ v1.37.0 │ 03 Oct 25 19:40 UTC │ 03 Oct 25 19:41 UTC │
	│ delete  │ -p embed-certs-327416                                                                                                                                                                                                                         │ embed-certs-327416           │ jenkins │ v1.37.0 │ 03 Oct 25 19:41 UTC │ 03 Oct 25 19:41 UTC │
	│ start   │ -p newest-cni-277907 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-277907            │ jenkins │ v1.37.0 │ 03 Oct 25 19:41 UTC │ 03 Oct 25 19:41 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-842797 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-842797 │ jenkins │ v1.37.0 │ 03 Oct 25 19:41 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-842797 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-842797 │ jenkins │ v1.37.0 │ 03 Oct 25 19:41 UTC │ 03 Oct 25 19:41 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-842797 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-842797 │ jenkins │ v1.37.0 │ 03 Oct 25 19:41 UTC │ 03 Oct 25 19:41 UTC │
	│ start   │ -p default-k8s-diff-port-842797 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-842797 │ jenkins │ v1.37.0 │ 03 Oct 25 19:41 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-277907 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-277907            │ jenkins │ v1.37.0 │ 03 Oct 25 19:41 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/03 19:41:18
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 19:41:18.245316  492927 out.go:360] Setting OutFile to fd 1 ...
	I1003 19:41:18.245857  492927 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 19:41:18.245889  492927 out.go:374] Setting ErrFile to fd 2...
	I1003 19:41:18.245910  492927 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 19:41:18.246202  492927 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-284583/.minikube/bin
	I1003 19:41:18.246626  492927 out.go:368] Setting JSON to false
	I1003 19:41:18.247622  492927 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8630,"bootTime":1759511849,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1003 19:41:18.247722  492927 start.go:140] virtualization:  
	I1003 19:41:18.250971  492927 out.go:179] * [default-k8s-diff-port-842797] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1003 19:41:18.255049  492927 out.go:179]   - MINIKUBE_LOCATION=21625
	I1003 19:41:18.255122  492927 notify.go:220] Checking for updates...
	I1003 19:41:18.258895  492927 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 19:41:18.261788  492927 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21625-284583/kubeconfig
	I1003 19:41:18.264701  492927 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-284583/.minikube
	I1003 19:41:18.267712  492927 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1003 19:41:18.270918  492927 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 19:41:18.274360  492927 config.go:182] Loaded profile config "default-k8s-diff-port-842797": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 19:41:18.275013  492927 driver.go:421] Setting default libvirt URI to qemu:///system
	I1003 19:41:18.306422  492927 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1003 19:41:18.306538  492927 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 19:41:18.401423  492927 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-03 19:41:18.391504113 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1003 19:41:18.401519  492927 docker.go:318] overlay module found
	I1003 19:41:18.404600  492927 out.go:179] * Using the docker driver based on existing profile
	I1003 19:41:18.407204  492927 start.go:304] selected driver: docker
	I1003 19:41:18.407223  492927 start.go:924] validating driver "docker" against &{Name:default-k8s-diff-port-842797 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-842797 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 19:41:18.407313  492927 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 19:41:18.408046  492927 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 19:41:18.504915  492927 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-03 19:41:18.48959129 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1003 19:41:18.505307  492927 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 19:41:18.505331  492927 cni.go:84] Creating CNI manager for ""
	I1003 19:41:18.505389  492927 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 19:41:18.505427  492927 start.go:348] cluster config:
	{Name:default-k8s-diff-port-842797 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-842797 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 19:41:18.508456  492927 out.go:179] * Starting "default-k8s-diff-port-842797" primary control-plane node in "default-k8s-diff-port-842797" cluster
	I1003 19:41:18.511233  492927 cache.go:123] Beginning downloading kic base image for docker with crio
	I1003 19:41:18.514100  492927 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1003 19:41:18.517041  492927 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 19:41:18.517104  492927 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21625-284583/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1003 19:41:18.517115  492927 cache.go:58] Caching tarball of preloaded images
	I1003 19:41:18.517224  492927 preload.go:233] Found /home/jenkins/minikube-integration/21625-284583/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1003 19:41:18.517234  492927 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1003 19:41:18.517344  492927 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/default-k8s-diff-port-842797/config.json ...
	I1003 19:41:18.517583  492927 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1003 19:41:18.541650  492927 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1003 19:41:18.541670  492927 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1003 19:41:18.541683  492927 cache.go:232] Successfully downloaded all kic artifacts
	I1003 19:41:18.541706  492927 start.go:360] acquireMachinesLock for default-k8s-diff-port-842797: {Name:mk20e38240481d350e4d3a0db3a5de4e7cd2a493 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 19:41:18.541758  492927 start.go:364] duration metric: took 34.314µs to acquireMachinesLock for "default-k8s-diff-port-842797"
	I1003 19:41:18.541779  492927 start.go:96] Skipping create...Using existing machine configuration
	I1003 19:41:18.541784  492927 fix.go:54] fixHost starting: 
	I1003 19:41:18.542062  492927 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-842797 --format={{.State.Status}}
	I1003 19:41:18.560829  492927 fix.go:112] recreateIfNeeded on default-k8s-diff-port-842797: state=Stopped err=<nil>
	W1003 19:41:18.560856  492927 fix.go:138] unexpected machine state, will restart: <nil>
	I1003 19:41:17.720889  490346 out.go:252]   - Generating certificates and keys ...
	I1003 19:41:17.720986  490346 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1003 19:41:17.721062  490346 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1003 19:41:18.148236  490346 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1003 19:41:18.709183  490346 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1003 19:41:18.935493  490346 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1003 19:41:19.792807  490346 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1003 19:41:20.405648  490346 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1003 19:41:20.406246  490346 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-277907] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1003 19:41:20.801672  490346 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1003 19:41:20.802025  490346 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-277907] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1003 19:41:20.962971  490346 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1003 19:41:18.563933  492927 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-842797" ...
	I1003 19:41:18.564011  492927 cli_runner.go:164] Run: docker start default-k8s-diff-port-842797
	I1003 19:41:18.955270  492927 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-842797 --format={{.State.Status}}
	I1003 19:41:18.984483  492927 kic.go:430] container "default-k8s-diff-port-842797" state is running.
	I1003 19:41:18.984915  492927 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-842797
	I1003 19:41:19.008343  492927 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/default-k8s-diff-port-842797/config.json ...
	I1003 19:41:19.008595  492927 machine.go:93] provisionDockerMachine start ...
	I1003 19:41:19.008676  492927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-842797
	I1003 19:41:19.034145  492927 main.go:141] libmachine: Using SSH client type: native
	I1003 19:41:19.034470  492927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33458 <nil> <nil>}
	I1003 19:41:19.034488  492927 main.go:141] libmachine: About to run SSH command:
	hostname
	I1003 19:41:19.035972  492927 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1003 19:41:22.181048  492927 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-842797
	
	I1003 19:41:22.181099  492927 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-842797"
	I1003 19:41:22.181178  492927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-842797
	I1003 19:41:22.203516  492927 main.go:141] libmachine: Using SSH client type: native
	I1003 19:41:22.203833  492927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33458 <nil> <nil>}
	I1003 19:41:22.203851  492927 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-842797 && echo "default-k8s-diff-port-842797" | sudo tee /etc/hostname
	I1003 19:41:22.363792  492927 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-842797
	
	I1003 19:41:22.363871  492927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-842797
	I1003 19:41:22.386282  492927 main.go:141] libmachine: Using SSH client type: native
	I1003 19:41:22.386600  492927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33458 <nil> <nil>}
	I1003 19:41:22.386625  492927 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-842797' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-842797/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-842797' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 19:41:22.522371  492927 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 19:41:22.522480  492927 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21625-284583/.minikube CaCertPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21625-284583/.minikube}
	I1003 19:41:22.522557  492927 ubuntu.go:190] setting up certificates
	I1003 19:41:22.522596  492927 provision.go:84] configureAuth start
	I1003 19:41:22.522701  492927 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-842797
	I1003 19:41:22.550618  492927 provision.go:143] copyHostCerts
	I1003 19:41:22.550718  492927 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-284583/.minikube/key.pem, removing ...
	I1003 19:41:22.550736  492927 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-284583/.minikube/key.pem
	I1003 19:41:22.550828  492927 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21625-284583/.minikube/key.pem (1675 bytes)
	I1003 19:41:22.550959  492927 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-284583/.minikube/ca.pem, removing ...
	I1003 19:41:22.550969  492927 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-284583/.minikube/ca.pem
	I1003 19:41:22.551018  492927 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21625-284583/.minikube/ca.pem (1082 bytes)
	I1003 19:41:22.551131  492927 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-284583/.minikube/cert.pem, removing ...
	I1003 19:41:22.551136  492927 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-284583/.minikube/cert.pem
	I1003 19:41:22.551163  492927 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21625-284583/.minikube/cert.pem (1123 bytes)
	I1003 19:41:22.551230  492927 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-842797 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-842797 localhost minikube]
	I1003 19:41:22.762189  492927 provision.go:177] copyRemoteCerts
	I1003 19:41:22.762259  492927 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 19:41:22.762307  492927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-842797
	I1003 19:41:22.782570  492927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/default-k8s-diff-port-842797/id_rsa Username:docker}
	I1003 19:41:22.883025  492927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 19:41:22.908260  492927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1003 19:41:22.933972  492927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1003 19:41:22.959329  492927 provision.go:87] duration metric: took 436.684394ms to configureAuth
	I1003 19:41:22.959365  492927 ubuntu.go:206] setting minikube options for container-runtime
	I1003 19:41:22.959649  492927 config.go:182] Loaded profile config "default-k8s-diff-port-842797": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 19:41:22.959805  492927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-842797
	I1003 19:41:22.992212  492927 main.go:141] libmachine: Using SSH client type: native
	I1003 19:41:22.992577  492927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33458 <nil> <nil>}
	I1003 19:41:22.992604  492927 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1003 19:41:23.373819  492927 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1003 19:41:23.373847  492927 machine.go:96] duration metric: took 4.365233407s to provisionDockerMachine
	I1003 19:41:23.373859  492927 start.go:293] postStartSetup for "default-k8s-diff-port-842797" (driver="docker")
	I1003 19:41:23.373894  492927 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 19:41:23.373982  492927 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 19:41:23.374032  492927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-842797
	I1003 19:41:23.407064  492927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/default-k8s-diff-port-842797/id_rsa Username:docker}
	I1003 19:41:23.509695  492927 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 19:41:23.513759  492927 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1003 19:41:23.513783  492927 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1003 19:41:23.513794  492927 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-284583/.minikube/addons for local assets ...
	I1003 19:41:23.513845  492927 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-284583/.minikube/files for local assets ...
	I1003 19:41:23.513927  492927 filesync.go:149] local asset: /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem -> 2864342.pem in /etc/ssl/certs
	I1003 19:41:23.514033  492927 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 19:41:23.522590  492927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem --> /etc/ssl/certs/2864342.pem (1708 bytes)
	I1003 19:41:23.542357  492927 start.go:296] duration metric: took 168.482612ms for postStartSetup
	I1003 19:41:23.542480  492927 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 19:41:23.542572  492927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-842797
	I1003 19:41:23.563225  492927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/default-k8s-diff-port-842797/id_rsa Username:docker}
	I1003 19:41:23.658983  492927 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1003 19:41:23.666316  492927 fix.go:56] duration metric: took 5.124524949s for fixHost
	I1003 19:41:23.666338  492927 start.go:83] releasing machines lock for "default-k8s-diff-port-842797", held for 5.124571062s
	I1003 19:41:23.666416  492927 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-842797
	I1003 19:41:23.700965  492927 ssh_runner.go:195] Run: cat /version.json
	I1003 19:41:23.701016  492927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-842797
	I1003 19:41:23.701265  492927 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 19:41:23.701318  492927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-842797
	I1003 19:41:23.738014  492927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/default-k8s-diff-port-842797/id_rsa Username:docker}
	I1003 19:41:23.745575  492927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/default-k8s-diff-port-842797/id_rsa Username:docker}
	I1003 19:41:23.862042  492927 ssh_runner.go:195] Run: systemctl --version
	I1003 19:41:23.957644  492927 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1003 19:41:24.008117  492927 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1003 19:41:24.013660  492927 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 19:41:24.013727  492927 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 19:41:24.024589  492927 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1003 19:41:24.024616  492927 start.go:495] detecting cgroup driver to use...
	I1003 19:41:24.024648  492927 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1003 19:41:24.024698  492927 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 19:41:24.041650  492927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 19:41:24.056364  492927 docker.go:218] disabling cri-docker service (if available) ...
	I1003 19:41:24.056438  492927 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1003 19:41:24.073455  492927 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1003 19:41:24.088107  492927 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1003 19:41:24.231116  492927 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1003 19:41:24.398243  492927 docker.go:234] disabling docker service ...
	I1003 19:41:24.398323  492927 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1003 19:41:24.426829  492927 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1003 19:41:24.441940  492927 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1003 19:41:24.577392  492927 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1003 19:41:24.724931  492927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
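Because only one runtime should own the CRI socket, the sequence above stops, disables and masks cri-docker and docker before CRI-O is configured. A quick manual check of the result on the node (a sketch; unit names taken from the log above):

    systemctl is-active --quiet docker && echo "docker still running" || echo "docker stopped"
    systemctl is-enabled docker.service    # expected to report "masked" after the step above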
	I1003 19:41:24.739134  492927 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 19:41:24.754447  492927 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1003 19:41:24.754553  492927 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:41:24.763850  492927 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1003 19:41:24.763966  492927 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:41:24.773034  492927 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:41:24.782028  492927 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:41:24.791453  492927 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 19:41:24.800045  492927 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:41:24.809695  492927 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:41:24.818447  492927 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:41:24.827656  492927 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 19:41:24.836101  492927 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 19:41:24.844187  492927 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 19:41:24.978560  492927 ssh_runner.go:195] Run: sudo systemctl restart crio
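The block above is minikube's CRI-O preparation: point crictl at the CRI-O socket, pin the pause image, switch the cgroup manager to cgroupfs, allow unprivileged low ports via default_sysctls, enable IP forwarding, then restart the runtime. A condensed manual equivalent, run as root on the node (paths and values copied from the log, not an exhaustive reproduction of every edit):

    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' > /etc/crictl.yaml
    sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
    sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    echo 1 > /proc/sys/net/ipv4/ip_forward
    systemctl daemon-reload && systemctl restart crio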
	I1003 19:41:25.135212  492927 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1003 19:41:25.135331  492927 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1003 19:41:25.140676  492927 start.go:563] Will wait 60s for crictl version
	I1003 19:41:25.140803  492927 ssh_runner.go:195] Run: which crictl
	I1003 19:41:25.149684  492927 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1003 19:41:25.187601  492927 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1003 19:41:25.187761  492927 ssh_runner.go:195] Run: crio --version
	I1003 19:41:25.220764  492927 ssh_runner.go:195] Run: crio --version
	I1003 19:41:25.256939  492927 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
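With /etc/crictl.yaml in place, the runtime identification printed above (RuntimeName cri-o, RuntimeVersion 1.34.1, RuntimeApiVersion v1) can be reproduced by hand; a minimal sketch, assuming crictl and crio are on the node's PATH as in this image:

    sudo crictl version    # reads runtime-endpoint from /etc/crictl.yaml
    crio --version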
	I1003 19:41:22.410968  490346 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1003 19:41:23.038815  490346 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1003 19:41:23.039174  490346 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1003 19:41:24.555277  490346 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1003 19:41:24.732485  490346 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1003 19:41:25.346950  490346 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1003 19:41:25.714076  490346 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1003 19:41:26.731630  490346 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1003 19:41:26.731752  490346 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1003 19:41:26.733106  490346 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1003 19:41:26.738724  490346 out.go:252]   - Booting up control plane ...
	I1003 19:41:26.738854  490346 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1003 19:41:26.738941  490346 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1003 19:41:26.739337  490346 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1003 19:41:26.789521  490346 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1003 19:41:26.791556  490346 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1003 19:41:26.801331  490346 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1003 19:41:26.801458  490346 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1003 19:41:26.801505  490346 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1003 19:41:27.011257  490346 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1003 19:41:27.011389  490346 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
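The kubelet-check phase polls the kubelet's local healthz endpoint until it responds; the same probe can be issued manually on the node (a sketch using the URL from the message above):

    curl -sf http://127.0.0.1:10248/healthz && echo " kubelet is healthy"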
	I1003 19:41:25.259982  492927 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-842797 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 19:41:25.277771  492927 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1003 19:41:25.282357  492927 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 19:41:25.291935  492927 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-842797 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-842797 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1003 19:41:25.292046  492927 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 19:41:25.292107  492927 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 19:41:25.332875  492927 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 19:41:25.332953  492927 crio.go:433] Images already preloaded, skipping extraction
	I1003 19:41:25.333042  492927 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 19:41:25.361739  492927 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 19:41:25.361760  492927 cache_images.go:85] Images are preloaded, skipping loading
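The "images are preloaded" decision above is made purely from the JSON emitted by "sudo crictl images --output json". A sketch for inspecting the same list by hand (jq is an assumption here, not something the test installs):

    sudo crictl images --output json | jq -r '.images[].repoTags[]' | sort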
	I1003 19:41:25.361768  492927 kubeadm.go:934] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1003 19:41:25.361864  492927 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-842797 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-842797 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1003 19:41:25.361955  492927 ssh_runner.go:195] Run: crio config
	I1003 19:41:25.448375  492927 cni.go:84] Creating CNI manager for ""
	I1003 19:41:25.448410  492927 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 19:41:25.448428  492927 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1003 19:41:25.448457  492927 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-842797 NodeName:default-k8s-diff-port-842797 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 19:41:25.448615  492927 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-842797"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1003 19:41:25.448699  492927 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1003 19:41:25.456834  492927 binaries.go:44] Found k8s binaries, skipping transfer
	I1003 19:41:25.456920  492927 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1003 19:41:25.464344  492927 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1003 19:41:25.477428  492927 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 19:41:25.491205  492927 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
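The 2225-byte file copied above is the kubeadm configuration dumped earlier, staged as /var/tmp/minikube/kubeadm.yaml.new. On a fresh cluster it would be handed to kubeadm init; in this run the existing control plane is reused instead (see the restart path further below). A hedged sketch of the shape of that call, not the exact flag set minikube passes:

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml \
      --ignore-preflight-errors=SystemVerification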
	I1003 19:41:25.515875  492927 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1003 19:41:25.523044  492927 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 19:41:25.532956  492927 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 19:41:25.684002  492927 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 19:41:25.702350  492927 certs.go:69] Setting up /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/default-k8s-diff-port-842797 for IP: 192.168.76.2
	I1003 19:41:25.702416  492927 certs.go:195] generating shared ca certs ...
	I1003 19:41:25.702447  492927 certs.go:227] acquiring lock for ca certs: {Name:mk5a10e6c921326e9c211447576eaeb893259ba7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:41:25.702615  492927 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21625-284583/.minikube/ca.key
	I1003 19:41:25.702725  492927 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.key
	I1003 19:41:25.702753  492927 certs.go:257] generating profile certs ...
	I1003 19:41:25.702874  492927 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/default-k8s-diff-port-842797/client.key
	I1003 19:41:25.702996  492927 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/default-k8s-diff-port-842797/apiserver.key.1fd7b568
	I1003 19:41:25.703065  492927 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/default-k8s-diff-port-842797/proxy-client.key
	I1003 19:41:25.703204  492927 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/286434.pem (1338 bytes)
	W1003 19:41:25.703266  492927 certs.go:480] ignoring /home/jenkins/minikube-integration/21625-284583/.minikube/certs/286434_empty.pem, impossibly tiny 0 bytes
	I1003 19:41:25.703290  492927 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 19:41:25.703345  492927 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem (1082 bytes)
	I1003 19:41:25.703397  492927 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem (1123 bytes)
	I1003 19:41:25.703450  492927 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/key.pem (1675 bytes)
	I1003 19:41:25.703533  492927 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem (1708 bytes)
	I1003 19:41:25.704178  492927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 19:41:25.777366  492927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1003 19:41:25.823133  492927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 19:41:25.858838  492927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1003 19:41:25.913391  492927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/default-k8s-diff-port-842797/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1003 19:41:25.976038  492927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/default-k8s-diff-port-842797/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1003 19:41:26.030999  492927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/default-k8s-diff-port-842797/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 19:41:26.062005  492927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/default-k8s-diff-port-842797/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1003 19:41:26.081618  492927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem --> /usr/share/ca-certificates/2864342.pem (1708 bytes)
	I1003 19:41:26.105574  492927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 19:41:26.132638  492927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/certs/286434.pem --> /usr/share/ca-certificates/286434.pem (1338 bytes)
	I1003 19:41:26.150195  492927 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1003 19:41:26.171351  492927 ssh_runner.go:195] Run: openssl version
	I1003 19:41:26.178258  492927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2864342.pem && ln -fs /usr/share/ca-certificates/2864342.pem /etc/ssl/certs/2864342.pem"
	I1003 19:41:26.190728  492927 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2864342.pem
	I1003 19:41:26.195503  492927 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  3 18:34 /usr/share/ca-certificates/2864342.pem
	I1003 19:41:26.195644  492927 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2864342.pem
	I1003 19:41:26.243168  492927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2864342.pem /etc/ssl/certs/3ec20f2e.0"
	I1003 19:41:26.253479  492927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 19:41:26.264117  492927 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 19:41:26.269739  492927 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  3 18:27 /usr/share/ca-certificates/minikubeCA.pem
	I1003 19:41:26.269822  492927 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 19:41:26.317649  492927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1003 19:41:26.328014  492927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/286434.pem && ln -fs /usr/share/ca-certificates/286434.pem /etc/ssl/certs/286434.pem"
	I1003 19:41:26.339556  492927 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/286434.pem
	I1003 19:41:26.344271  492927 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  3 18:34 /usr/share/ca-certificates/286434.pem
	I1003 19:41:26.344352  492927 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/286434.pem
	I1003 19:41:26.391977  492927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/286434.pem /etc/ssl/certs/51391683.0"
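Each CA copied to /usr/share/ca-certificates is also linked under /etc/ssl/certs by its OpenSSL subject hash (3ec20f2e.0, b5213941.0 and 51391683.0 above), which is how OpenSSL-based clients locate trust anchors. A sketch of creating such a link by hand for the minikube CA:

    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"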
	I1003 19:41:26.402215  492927 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 19:41:26.407233  492927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1003 19:41:26.455054  492927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1003 19:41:26.528267  492927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1003 19:41:26.637074  492927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1003 19:41:26.733803  492927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1003 19:41:26.865283  492927 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
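The -checkend 86400 calls above confirm that none of the control-plane certificates expire within the next 24 hours; a non-zero exit from openssl would force regeneration. The same check over the whole certs tree, as a sketch:

    for c in /var/lib/minikube/certs/*.crt /var/lib/minikube/certs/etcd/*.crt; do
      sudo openssl x509 -noout -in "$c" -checkend 86400 && echo "OK  $c" || echo "EXPIRING  $c"
    done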
	I1003 19:41:27.039516  492927 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-842797 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-842797 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 19:41:27.039673  492927 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1003 19:41:27.039775  492927 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1003 19:41:27.111846  492927 cri.go:89] found id: "02535cb7690885e90adcc200c551315486edf2d6f1bb2cbd015e185c373fe0c2"
	I1003 19:41:27.111871  492927 cri.go:89] found id: "72a3c6c093ee7526caa8d968d0ef1b63f258556b89c398a06f6b15295b410635"
	I1003 19:41:27.111885  492927 cri.go:89] found id: "95f720e182dbb5dbc9ca0b55d30ef0869679c1087e3e87174822cffb7d42a5ea"
	I1003 19:41:27.111898  492927 cri.go:89] found id: "a6485da9cdb1c66096d6663ef94b1c675b5cc8904328eba3b2537fa5c260cdba"
	I1003 19:41:27.111903  492927 cri.go:89] found id: ""
	I1003 19:41:27.111955  492927 ssh_runner.go:195] Run: sudo runc list -f json
	W1003 19:41:27.134697  492927 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-03T19:41:27Z" level=error msg="open /run/runc: no such file or directory"
	I1003 19:41:27.134788  492927 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 19:41:27.149171  492927 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1003 19:41:27.149243  492927 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1003 19:41:27.149342  492927 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1003 19:41:27.162633  492927 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1003 19:41:27.163124  492927 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-842797" does not appear in /home/jenkins/minikube-integration/21625-284583/kubeconfig
	I1003 19:41:27.163298  492927 kubeconfig.go:62] /home/jenkins/minikube-integration/21625-284583/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-842797" cluster setting kubeconfig missing "default-k8s-diff-port-842797" context setting]
	I1003 19:41:27.163671  492927 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/kubeconfig: {Name:mkc1323fd87f4a78231a26d2dab0dff7feecf1e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:41:27.165372  492927 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1003 19:41:27.182719  492927 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1003 19:41:27.182796  492927 kubeadm.go:601] duration metric: took 33.53223ms to restartPrimaryControlPlane
	I1003 19:41:27.182825  492927 kubeadm.go:402] duration metric: took 143.318809ms to StartCluster
	I1003 19:41:27.182874  492927 settings.go:142] acquiring lock: {Name:mkc95577dbc448e3409dfa2b5e53a3a1327cb451 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:41:27.182985  492927 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21625-284583/kubeconfig
	I1003 19:41:27.183683  492927 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/kubeconfig: {Name:mkc1323fd87f4a78231a26d2dab0dff7feecf1e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:41:27.183964  492927 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1003 19:41:27.184396  492927 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1003 19:41:27.184467  492927 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-842797"
	I1003 19:41:27.184482  492927 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-842797"
	W1003 19:41:27.184489  492927 addons.go:247] addon storage-provisioner should already be in state true
	I1003 19:41:27.184512  492927 host.go:66] Checking if "default-k8s-diff-port-842797" exists ...
	I1003 19:41:27.185211  492927 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-842797 --format={{.State.Status}}
	I1003 19:41:27.185506  492927 config.go:182] Loaded profile config "default-k8s-diff-port-842797": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 19:41:27.185588  492927 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-842797"
	I1003 19:41:27.185629  492927 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-842797"
	W1003 19:41:27.185653  492927 addons.go:247] addon dashboard should already be in state true
	I1003 19:41:27.185691  492927 host.go:66] Checking if "default-k8s-diff-port-842797" exists ...
	I1003 19:41:27.186131  492927 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-842797 --format={{.State.Status}}
	I1003 19:41:27.186537  492927 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-842797"
	I1003 19:41:27.186563  492927 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-842797"
	I1003 19:41:27.186864  492927 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-842797 --format={{.State.Status}}
	I1003 19:41:27.196886  492927 out.go:179] * Verifying Kubernetes components...
	I1003 19:41:27.200284  492927 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 19:41:27.250825  492927 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 19:41:27.250917  492927 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1003 19:41:27.255412  492927 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-842797"
	W1003 19:41:27.255435  492927 addons.go:247] addon default-storageclass should already be in state true
	I1003 19:41:27.255461  492927 host.go:66] Checking if "default-k8s-diff-port-842797" exists ...
	I1003 19:41:27.255725  492927 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 19:41:27.255750  492927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1003 19:41:27.255815  492927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-842797
	I1003 19:41:27.255989  492927 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-842797 --format={{.State.Status}}
	I1003 19:41:27.259471  492927 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1003 19:41:27.263975  492927 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1003 19:41:27.264003  492927 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1003 19:41:27.264082  492927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-842797
	I1003 19:41:27.314703  492927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/default-k8s-diff-port-842797/id_rsa Username:docker}
	I1003 19:41:27.324992  492927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/default-k8s-diff-port-842797/id_rsa Username:docker}
	I1003 19:41:27.326036  492927 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1003 19:41:27.326051  492927 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1003 19:41:27.326106  492927 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-842797
	I1003 19:41:27.356852  492927 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/default-k8s-diff-port-842797/id_rsa Username:docker}
	I1003 19:41:27.604573  492927 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1003 19:41:27.604640  492927 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1003 19:41:27.667604  492927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1003 19:41:27.748166  492927 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 19:41:27.765149  492927 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1003 19:41:27.765217  492927 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1003 19:41:27.866070  492927 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1003 19:41:27.866142  492927 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1003 19:41:27.867110  492927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 19:41:27.948392  492927 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1003 19:41:27.948466  492927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1003 19:41:28.016141  492927 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1003 19:41:28.016167  492927 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1003 19:41:28.073067  492927 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1003 19:41:28.073093  492927 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1003 19:41:28.149126  492927 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1003 19:41:28.149151  492927 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1003 19:41:28.208524  492927 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1003 19:41:28.208556  492927 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1003 19:41:28.231959  492927 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1003 19:41:28.231986  492927 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1003 19:41:28.014964  490346 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.002917228s
	I1003 19:41:28.035578  490346 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1003 19:41:28.035693  490346 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1003 19:41:28.035809  490346 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1003 19:41:28.035898  490346 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1003 19:41:28.250802  492927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1003 19:41:34.774610  492927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.106919723s)
	I1003 19:41:34.775024  492927 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (7.026768651s)
	I1003 19:41:34.775059  492927 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-842797" to be "Ready" ...
	I1003 19:41:34.895138  492927 node_ready.go:49] node "default-k8s-diff-port-842797" is "Ready"
	I1003 19:41:34.895163  492927 node_ready.go:38] duration metric: took 120.094055ms for node "default-k8s-diff-port-842797" to be "Ready" ...
	I1003 19:41:34.895176  492927 api_server.go:52] waiting for apiserver process to appear ...
	I1003 19:41:34.895233  492927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 19:41:37.483237  492927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.616064169s)
	I1003 19:41:37.541504  492927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (9.290653797s)
	I1003 19:41:37.541680  492927 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.64643655s)
	I1003 19:41:37.541704  492927 api_server.go:72] duration metric: took 10.357688347s to wait for apiserver process to appear ...
	I1003 19:41:37.541716  492927 api_server.go:88] waiting for apiserver healthz status ...
	I1003 19:41:37.541734  492927 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1003 19:41:37.544644  492927 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-842797 addons enable metrics-server
	
	I1003 19:41:37.547662  492927 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1003 19:41:37.551282  492927 addons.go:514] duration metric: took 10.36687222s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1003 19:41:37.552866  492927 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
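This healthz probe goes over TLS to the API server on the non-default port 8444. It can be reproduced from the host by skipping certificate verification (or pointing curl at the minikube CA); a sketch that assumes /healthz is reachable anonymously, as it is on a default kubeadm cluster:

    curl -sk https://192.168.76.2:8444/healthz    # expected output: ok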
	I1003 19:41:37.554090  492927 api_server.go:141] control plane version: v1.34.1
	I1003 19:41:37.554115  492927 api_server.go:131] duration metric: took 12.391151ms to wait for apiserver health ...
	I1003 19:41:37.554124  492927 system_pods.go:43] waiting for kube-system pods to appear ...
	I1003 19:41:37.561762  492927 system_pods.go:59] 8 kube-system pods found
	I1003 19:41:37.561811  492927 system_pods.go:61] "coredns-66bc5c9577-l8knz" [20442eef-faaa-4dfb-bd27-e8f4fda45d0e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1003 19:41:37.561847  492927 system_pods.go:61] "etcd-default-k8s-diff-port-842797" [8db70af0-84e1-42e2-8676-3db2f2732f13] Running
	I1003 19:41:37.561854  492927 system_pods.go:61] "kindnet-96q8s" [ab4664bf-01c0-4b62-9eb8-f65194dff517] Running
	I1003 19:41:37.561861  492927 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-842797" [c7b2a799-b6f6-4be1-a67c-d603d2a8cd7e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1003 19:41:37.561866  492927 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-842797" [44ec1bf9-f1e3-4342-bd43-2202ff291aeb] Running
	I1003 19:41:37.561878  492927 system_pods.go:61] "kube-proxy-gvslj" [3cfa5fdd-13b6-4c43-aa02-a74c256ceed2] Running
	I1003 19:41:37.561883  492927 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-842797" [6aba1d05-eec7-4030-b4ee-2b39cd76ec2a] Running
	I1003 19:41:37.561887  492927 system_pods.go:61] "storage-provisioner" [e700db76-d3d4-422f-8069-cb3a0b9ebe86] Running
	I1003 19:41:37.561899  492927 system_pods.go:74] duration metric: took 7.76694ms to wait for pod list to return data ...
	I1003 19:41:37.561926  492927 default_sa.go:34] waiting for default service account to be created ...
	I1003 19:41:37.565721  492927 default_sa.go:45] found service account: "default"
	I1003 19:41:37.565792  492927 default_sa.go:55] duration metric: took 3.828556ms for default service account to be created ...
	I1003 19:41:37.565810  492927 system_pods.go:116] waiting for k8s-apps to be running ...
	I1003 19:41:37.570256  492927 system_pods.go:86] 8 kube-system pods found
	I1003 19:41:37.570289  492927 system_pods.go:89] "coredns-66bc5c9577-l8knz" [20442eef-faaa-4dfb-bd27-e8f4fda45d0e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1003 19:41:37.570332  492927 system_pods.go:89] "etcd-default-k8s-diff-port-842797" [8db70af0-84e1-42e2-8676-3db2f2732f13] Running
	I1003 19:41:37.570339  492927 system_pods.go:89] "kindnet-96q8s" [ab4664bf-01c0-4b62-9eb8-f65194dff517] Running
	I1003 19:41:37.570354  492927 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-842797" [c7b2a799-b6f6-4be1-a67c-d603d2a8cd7e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1003 19:41:37.570360  492927 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-842797" [44ec1bf9-f1e3-4342-bd43-2202ff291aeb] Running
	I1003 19:41:37.570369  492927 system_pods.go:89] "kube-proxy-gvslj" [3cfa5fdd-13b6-4c43-aa02-a74c256ceed2] Running
	I1003 19:41:37.570392  492927 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-842797" [6aba1d05-eec7-4030-b4ee-2b39cd76ec2a] Running
	I1003 19:41:37.570401  492927 system_pods.go:89] "storage-provisioner" [e700db76-d3d4-422f-8069-cb3a0b9ebe86] Running
	I1003 19:41:37.570419  492927 system_pods.go:126] duration metric: took 4.591218ms to wait for k8s-apps to be running ...
	I1003 19:41:37.570434  492927 system_svc.go:44] waiting for kubelet service to be running ....
	I1003 19:41:37.570516  492927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 19:41:37.599990  492927 system_svc.go:56] duration metric: took 29.546085ms WaitForService to wait for kubelet
	I1003 19:41:37.600018  492927 kubeadm.go:586] duration metric: took 10.416000501s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 19:41:37.600038  492927 node_conditions.go:102] verifying NodePressure condition ...
	I1003 19:41:37.603611  492927 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1003 19:41:37.603645  492927 node_conditions.go:123] node cpu capacity is 2
	I1003 19:41:37.603658  492927 node_conditions.go:105] duration metric: took 3.614571ms to run NodePressure ...
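The NodePressure step reads the capacity the node reports (2 CPUs and 203034800Ki of ephemeral storage here). The same figures can be pulled back out with kubectl once the profile's context is in the kubeconfig; a sketch:

    kubectl --context default-k8s-diff-port-842797 get nodes -o jsonpath='{.items[0].status.capacity.cpu}'
    kubectl --context default-k8s-diff-port-842797 describe node | grep -A 5 '^Capacity'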
	I1003 19:41:37.603671  492927 start.go:241] waiting for startup goroutines ...
	I1003 19:41:37.603683  492927 start.go:246] waiting for cluster config update ...
	I1003 19:41:37.603694  492927 start.go:255] writing updated cluster config ...
	I1003 19:41:37.603988  492927 ssh_runner.go:195] Run: rm -f paused
	I1003 19:41:37.608088  492927 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1003 19:41:37.659869  492927 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-l8knz" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:41:37.109562  490346 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 9.072518487s
	I1003 19:41:37.983700  490346 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 9.938894377s
	I1003 19:41:39.538532  490346 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 11.502094571s
	I1003 19:41:39.561910  490346 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1003 19:41:39.580859  490346 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1003 19:41:39.595952  490346 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1003 19:41:39.596343  490346 kubeadm.go:318] [mark-control-plane] Marking the node newest-cni-277907 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1003 19:41:39.612511  490346 kubeadm.go:318] [bootstrap-token] Using token: yop29i.ekjdwam33eoj6xi1
	I1003 19:41:39.615494  490346 out.go:252]   - Configuring RBAC rules ...
	I1003 19:41:39.615635  490346 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1003 19:41:39.631986  490346 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1003 19:41:39.648390  490346 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1003 19:41:39.653997  490346 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1003 19:41:39.664764  490346 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1003 19:41:39.675999  490346 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1003 19:41:39.946948  490346 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1003 19:41:40.396962  490346 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1003 19:41:40.947416  490346 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1003 19:41:40.948393  490346 kubeadm.go:318] 
	I1003 19:41:40.948475  490346 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1003 19:41:40.948481  490346 kubeadm.go:318] 
	I1003 19:41:40.948562  490346 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1003 19:41:40.948567  490346 kubeadm.go:318] 
	I1003 19:41:40.948593  490346 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1003 19:41:40.948660  490346 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1003 19:41:40.948714  490346 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1003 19:41:40.948719  490346 kubeadm.go:318] 
	I1003 19:41:40.948815  490346 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1003 19:41:40.948822  490346 kubeadm.go:318] 
	I1003 19:41:40.948871  490346 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1003 19:41:40.948876  490346 kubeadm.go:318] 
	I1003 19:41:40.948929  490346 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1003 19:41:40.949007  490346 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1003 19:41:40.949083  490346 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1003 19:41:40.949089  490346 kubeadm.go:318] 
	I1003 19:41:40.949176  490346 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1003 19:41:40.949256  490346 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1003 19:41:40.949261  490346 kubeadm.go:318] 
	I1003 19:41:40.949347  490346 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token yop29i.ekjdwam33eoj6xi1 \
	I1003 19:41:40.949455  490346 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:f66ff31263aa4cda6b17caa2076838d6a1918275f1c2773b90b119c0d4a4d71a \
	I1003 19:41:40.949476  490346 kubeadm.go:318] 	--control-plane 
	I1003 19:41:40.949480  490346 kubeadm.go:318] 
	I1003 19:41:40.949573  490346 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1003 19:41:40.949580  490346 kubeadm.go:318] 
	I1003 19:41:40.949665  490346 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token yop29i.ekjdwam33eoj6xi1 \
	I1003 19:41:40.950123  490346 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:f66ff31263aa4cda6b17caa2076838d6a1918275f1c2773b90b119c0d4a4d71a 
	I1003 19:41:40.954944  490346 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1003 19:41:40.955174  490346 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1003 19:41:40.955282  490346 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1003 19:41:40.955302  490346 cni.go:84] Creating CNI manager for ""
	I1003 19:41:40.955310  490346 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 19:41:40.960411  490346 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1003 19:41:40.963374  490346 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1003 19:41:40.968690  490346 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1003 19:41:40.968714  490346 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1003 19:41:40.998452  490346 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1003 19:41:41.496098  490346 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1003 19:41:41.496299  490346 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 19:41:41.496437  490346 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-277907 minikube.k8s.io/updated_at=2025_10_03T19_41_41_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=a43873c79fc22f8b1ccd29d3dfa635d392b09335 minikube.k8s.io/name=newest-cni-277907 minikube.k8s.io/primary=true
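The label command above stamps the node with minikube's version and commit metadata. Verifying it afterwards with the same in-node kubectl and kubeconfig is a one-liner (sketch):

    sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get node newest-cni-277907 --show-labels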
	I1003 19:41:41.908015  490346 ops.go:34] apiserver oom_adj: -16
	I1003 19:41:41.908281  490346 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1003 19:41:39.666056  492927 pod_ready.go:104] pod "coredns-66bc5c9577-l8knz" is not "Ready", error: <nil>
	W1003 19:41:41.669121  492927 pod_ready.go:104] pod "coredns-66bc5c9577-l8knz" is not "Ready", error: <nil>
	I1003 19:41:42.408356  490346 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 19:41:42.909356  490346 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 19:41:43.408347  490346 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 19:41:43.908754  490346 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 19:41:44.408302  490346 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 19:41:44.908285  490346 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 19:41:45.408352  490346 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 19:41:45.908329  490346 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1003 19:41:46.055059  490346 kubeadm.go:1113] duration metric: took 4.558804827s to wait for elevateKubeSystemPrivileges
	I1003 19:41:46.055085  490346 kubeadm.go:402] duration metric: took 28.645119987s to StartCluster
	I1003 19:41:46.055113  490346 settings.go:142] acquiring lock: {Name:mkc95577dbc448e3409dfa2b5e53a3a1327cb451 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:41:46.055173  490346 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21625-284583/kubeconfig
	I1003 19:41:46.056264  490346 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/kubeconfig: {Name:mkc1323fd87f4a78231a26d2dab0dff7feecf1e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:41:46.056479  490346 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1003 19:41:46.056749  490346 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1003 19:41:46.056897  490346 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1003 19:41:46.056979  490346 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-277907"
	I1003 19:41:46.056999  490346 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-277907"
	I1003 19:41:46.057028  490346 host.go:66] Checking if "newest-cni-277907" exists ...
	I1003 19:41:46.057319  490346 config.go:182] Loaded profile config "newest-cni-277907": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 19:41:46.057413  490346 addons.go:69] Setting default-storageclass=true in profile "newest-cni-277907"
	I1003 19:41:46.057458  490346 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-277907"
	I1003 19:41:46.057536  490346 cli_runner.go:164] Run: docker container inspect newest-cni-277907 --format={{.State.Status}}
	I1003 19:41:46.057997  490346 cli_runner.go:164] Run: docker container inspect newest-cni-277907 --format={{.State.Status}}
	I1003 19:41:46.061501  490346 out.go:179] * Verifying Kubernetes components...
	I1003 19:41:46.065930  490346 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 19:41:46.114759  490346 addons.go:238] Setting addon default-storageclass=true in "newest-cni-277907"
	I1003 19:41:46.114798  490346 host.go:66] Checking if "newest-cni-277907" exists ...
	I1003 19:41:46.115309  490346 cli_runner.go:164] Run: docker container inspect newest-cni-277907 --format={{.State.Status}}
	I1003 19:41:46.115812  490346 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1003 19:41:46.120160  490346 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 19:41:46.120189  490346 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1003 19:41:46.120271  490346 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-277907
	I1003 19:41:46.164611  490346 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1003 19:41:46.164636  490346 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1003 19:41:46.164699  490346 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-277907
	I1003 19:41:46.176323  490346 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/newest-cni-277907/id_rsa Username:docker}
	I1003 19:41:46.197550  490346 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/newest-cni-277907/id_rsa Username:docker}
	I1003 19:41:46.718861  490346 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 19:41:46.740091  490346 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1003 19:41:46.740267  490346 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 19:41:46.759988  490346 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1003 19:41:47.881428  490346 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.162486178s)
	I1003 19:41:47.881538  490346 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.141252665s)
	I1003 19:41:47.881647  490346 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.141522578s)
	I1003 19:41:47.881688  490346 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
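	The sed | kubectl replace pipeline above rewrites the CoreDNS Corefile in place to add the host.minikube.internal record. A minimal sketch of the resulting fragment, assuming the stock kubeadm Corefile layout for this Kubernetes version; only the "log" line and the "hosts" block come from the command itself, the surrounding plugins are the usual defaults and are elided with "...":
	
	    .:53 {
	        log
	        errors
	        ...
	        hosts {
	           192.168.85.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf
	        cache 30
	        ...
	    }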
	I1003 19:41:47.882502  490346 api_server.go:52] waiting for apiserver process to appear ...
	I1003 19:41:47.882589  490346 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 19:41:47.882724  490346 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.122709821s)
	I1003 19:41:47.930836  490346 api_server.go:72] duration metric: took 1.874327817s to wait for apiserver process to appear ...
	I1003 19:41:47.930862  490346 api_server.go:88] waiting for apiserver healthz status ...
	I1003 19:41:47.930881  490346 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1003 19:41:47.959592  490346 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1003 19:41:44.166618  492927 pod_ready.go:104] pod "coredns-66bc5c9577-l8knz" is not "Ready", error: <nil>
	W1003 19:41:46.181392  492927 pod_ready.go:104] pod "coredns-66bc5c9577-l8knz" is not "Ready", error: <nil>
	I1003 19:41:47.963437  490346 addons.go:514] duration metric: took 1.906536451s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1003 19:41:47.964525  490346 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1003 19:41:47.969449  490346 api_server.go:141] control plane version: v1.34.1
	I1003 19:41:47.969480  490346 api_server.go:131] duration metric: took 38.610087ms to wait for apiserver health ...
	I1003 19:41:47.969489  490346 system_pods.go:43] waiting for kube-system pods to appear ...
	I1003 19:41:47.988618  490346 system_pods.go:59] 9 kube-system pods found
	I1003 19:41:47.988760  490346 system_pods.go:61] "coredns-66bc5c9577-qvbbr" [1cd277df-18e2-4280-aed7-5f55acbafa2e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1003 19:41:47.988797  490346 system_pods.go:61] "coredns-66bc5c9577-sqss5" [28e16136-f534-48a3-8b69-2cd91ee1b70b] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1003 19:41:47.988834  490346 system_pods.go:61] "etcd-newest-cni-277907" [9a388045-313d-4a5e-a56a-c070a23d10f0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1003 19:41:47.988854  490346 system_pods.go:61] "kindnet-b6wxk" [efbd6505-dbd9-4229-9f30-5de99ce9258e] Running
	I1003 19:41:47.988874  490346 system_pods.go:61] "kube-apiserver-newest-cni-277907" [e333974e-7706-4dd3-a108-96d50d755815] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1003 19:41:47.988913  490346 system_pods.go:61] "kube-controller-manager-newest-cni-277907" [ca367ef6-21e7-49f2-bb9e-a73465e96941] Running
	I1003 19:41:47.988932  490346 system_pods.go:61] "kube-proxy-2ss46" [3e843f2f-9e62-4da8-a413-b23a4e8c33ef] Running
	I1003 19:41:47.988954  490346 system_pods.go:61] "kube-scheduler-newest-cni-277907" [7d578ea2-dbb0-4886-96d7-ed212ff4907a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1003 19:41:47.988999  490346 system_pods.go:61] "storage-provisioner" [da0d0bff-83e0-4502-b45b-5becfa549ef9] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1003 19:41:47.989030  490346 system_pods.go:74] duration metric: took 19.52545ms to wait for pod list to return data ...
	I1003 19:41:47.989054  490346 default_sa.go:34] waiting for default service account to be created ...
	I1003 19:41:48.008298  490346 default_sa.go:45] found service account: "default"
	I1003 19:41:48.008343  490346 default_sa.go:55] duration metric: took 19.24213ms for default service account to be created ...
	I1003 19:41:48.008395  490346 kubeadm.go:586] duration metric: took 1.951889274s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1003 19:41:48.008422  490346 node_conditions.go:102] verifying NodePressure condition ...
	I1003 19:41:48.059277  490346 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1003 19:41:48.059312  490346 node_conditions.go:123] node cpu capacity is 2
	I1003 19:41:48.059325  490346 node_conditions.go:105] duration metric: took 50.897172ms to run NodePressure ...
	I1003 19:41:48.059397  490346 start.go:241] waiting for startup goroutines ...
	I1003 19:41:48.385388  490346 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-277907" context rescaled to 1 replicas
	I1003 19:41:48.385424  490346 start.go:246] waiting for cluster config update ...
	I1003 19:41:48.385466  490346 start.go:255] writing updated cluster config ...
	I1003 19:41:48.385768  490346 ssh_runner.go:195] Run: rm -f paused
	I1003 19:41:48.493169  490346 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1003 19:41:48.496636  490346 out.go:179] * Done! kubectl is now configured to use "newest-cni-277907" cluster and "default" namespace by default
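	The "minor skew: 1" line above is informational: the host kubectl (1.33.2) is one minor version behind the cluster (1.34.1), which is generally within the supported client/server skew. A hedged example of the kind of follow-up check a reader could run from the host once this profile is active (standard kubectl commands, not part of this run; the context name comes from the "Done!" line above):
	
	    kubectl config use-context newest-cni-277907
	    kubectl get nodes -o wide
	    kubectl -n kube-system get pods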
	
	
	==> CRI-O <==
	Oct 03 19:41:46 newest-cni-277907 crio[836]: time="2025-10-03T19:41:46.111655875Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 19:41:46 newest-cni-277907 crio[836]: time="2025-10-03T19:41:46.142044476Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-2ss46/POD" id=da0875ee-116f-4845-afd4-51c85acbfbff name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 03 19:41:46 newest-cni-277907 crio[836]: time="2025-10-03T19:41:46.142285858Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 19:41:46 newest-cni-277907 crio[836]: time="2025-10-03T19:41:46.142675624Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=d7e713d5-023c-48b0-8db4-3102e81c21b7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 03 19:41:46 newest-cni-277907 crio[836]: time="2025-10-03T19:41:46.15648544Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=da0875ee-116f-4845-afd4-51c85acbfbff name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 03 19:41:46 newest-cni-277907 crio[836]: time="2025-10-03T19:41:46.170045668Z" level=info msg="Ran pod sandbox d0b7c76486754a977b080f6cbe2fc6aa0a56b0b5f05ffdb1275723760708f29e with infra container: kube-system/kindnet-b6wxk/POD" id=d7e713d5-023c-48b0-8db4-3102e81c21b7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 03 19:41:46 newest-cni-277907 crio[836]: time="2025-10-03T19:41:46.187873064Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=6309bf22-aedd-4363-9c75-f92cb3f75b72 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 19:41:46 newest-cni-277907 crio[836]: time="2025-10-03T19:41:46.225203Z" level=info msg="Ran pod sandbox 367a1751c2079c94cbfd55429b7fe85f8fae690755dd9b050d1ee7aac4c67f6d with infra container: kube-system/kube-proxy-2ss46/POD" id=da0875ee-116f-4845-afd4-51c85acbfbff name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 03 19:41:46 newest-cni-277907 crio[836]: time="2025-10-03T19:41:46.225527321Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=9ae00caa-5cb1-4485-9bc9-15b619a17739 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 19:41:46 newest-cni-277907 crio[836]: time="2025-10-03T19:41:46.254965048Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=16eb7d0e-8cb6-4081-bb73-5acf6546a8dd name=/runtime.v1.ImageService/ImageStatus
	Oct 03 19:41:46 newest-cni-277907 crio[836]: time="2025-10-03T19:41:46.260160625Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=64f16c05-a58f-432f-85c7-7aa5436226de name=/runtime.v1.ImageService/ImageStatus
	Oct 03 19:41:46 newest-cni-277907 crio[836]: time="2025-10-03T19:41:46.266365325Z" level=info msg="Creating container: kube-system/kindnet-b6wxk/kindnet-cni" id=45df18f0-eb1c-4f00-9c72-4afcdb09eb91 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:41:46 newest-cni-277907 crio[836]: time="2025-10-03T19:41:46.276598551Z" level=info msg="Creating container: kube-system/kube-proxy-2ss46/kube-proxy" id=38a2162c-4692-49ae-aba3-fc873c88ccec name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:41:46 newest-cni-277907 crio[836]: time="2025-10-03T19:41:46.285070954Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 19:41:46 newest-cni-277907 crio[836]: time="2025-10-03T19:41:46.285438049Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 19:41:46 newest-cni-277907 crio[836]: time="2025-10-03T19:41:46.317033315Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 19:41:46 newest-cni-277907 crio[836]: time="2025-10-03T19:41:46.317805553Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 19:41:46 newest-cni-277907 crio[836]: time="2025-10-03T19:41:46.318242926Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 19:41:46 newest-cni-277907 crio[836]: time="2025-10-03T19:41:46.323732546Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 19:41:46 newest-cni-277907 crio[836]: time="2025-10-03T19:41:46.364613053Z" level=info msg="Created container 905d2c3a1bb767ab4d056acbd3016a732685d2b0513e9072034cc37c7ed0de19: kube-system/kube-proxy-2ss46/kube-proxy" id=38a2162c-4692-49ae-aba3-fc873c88ccec name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:41:46 newest-cni-277907 crio[836]: time="2025-10-03T19:41:46.368097144Z" level=info msg="Starting container: 905d2c3a1bb767ab4d056acbd3016a732685d2b0513e9072034cc37c7ed0de19" id=c8a928be-a004-40f7-aa5e-4bf6e5383ca2 name=/runtime.v1.RuntimeService/StartContainer
	Oct 03 19:41:46 newest-cni-277907 crio[836]: time="2025-10-03T19:41:46.375299644Z" level=info msg="Started container" PID=1448 containerID=905d2c3a1bb767ab4d056acbd3016a732685d2b0513e9072034cc37c7ed0de19 description=kube-system/kube-proxy-2ss46/kube-proxy id=c8a928be-a004-40f7-aa5e-4bf6e5383ca2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=367a1751c2079c94cbfd55429b7fe85f8fae690755dd9b050d1ee7aac4c67f6d
	Oct 03 19:41:46 newest-cni-277907 crio[836]: time="2025-10-03T19:41:46.456405533Z" level=info msg="Created container 06803893ae40647902a607a53cdade20a6e5b96035684f498e25a905748295a5: kube-system/kindnet-b6wxk/kindnet-cni" id=45df18f0-eb1c-4f00-9c72-4afcdb09eb91 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:41:46 newest-cni-277907 crio[836]: time="2025-10-03T19:41:46.460179885Z" level=info msg="Starting container: 06803893ae40647902a607a53cdade20a6e5b96035684f498e25a905748295a5" id=18ffb9de-1f9a-4696-a088-cf605198a4a7 name=/runtime.v1.RuntimeService/StartContainer
	Oct 03 19:41:46 newest-cni-277907 crio[836]: time="2025-10-03T19:41:46.509419776Z" level=info msg="Started container" PID=1453 containerID=06803893ae40647902a607a53cdade20a6e5b96035684f498e25a905748295a5 description=kube-system/kindnet-b6wxk/kindnet-cni id=18ffb9de-1f9a-4696-a088-cf605198a4a7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d0b7c76486754a977b080f6cbe2fc6aa0a56b0b5f05ffdb1275723760708f29e
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	905d2c3a1bb76       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   3 seconds ago       Running             kube-proxy                0                   367a1751c2079       kube-proxy-2ss46                            kube-system
	06803893ae406       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   3 seconds ago       Running             kindnet-cni               0                   d0b7c76486754       kindnet-b6wxk                               kube-system
	4457f483e5e1c       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   21 seconds ago      Running             kube-controller-manager   0                   3b16987c70c72       kube-controller-manager-newest-cni-277907   kube-system
	cd97772b1864e       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   21 seconds ago      Running             kube-apiserver            0                   e11c1020c0176       kube-apiserver-newest-cni-277907            kube-system
	cbfccaee14ce5       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   21 seconds ago      Running             etcd                      0                   857d5b4e811f4       etcd-newest-cni-277907                      kube-system
	6352e06e25b8e       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   21 seconds ago      Running             kube-scheduler            0                   8066511fec9f6       kube-scheduler-newest-cni-277907            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-277907
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-277907
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a43873c79fc22f8b1ccd29d3dfa635d392b09335
	                    minikube.k8s.io/name=newest-cni-277907
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_03T19_41_41_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 03 Oct 2025 19:41:37 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-277907
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 03 Oct 2025 19:41:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 03 Oct 2025 19:41:40 +0000   Fri, 03 Oct 2025 19:41:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 03 Oct 2025 19:41:40 +0000   Fri, 03 Oct 2025 19:41:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 03 Oct 2025 19:41:40 +0000   Fri, 03 Oct 2025 19:41:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 03 Oct 2025 19:41:40 +0000   Fri, 03 Oct 2025 19:41:29 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-277907
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 a095a97c67034323ae41da2c5f078230
	  System UUID:                20e576e4-dd3f-4016-9b52-c906c3cc7f99
	  Boot ID:                    3762136e-8bec-4104-a5cb-0b1976f6048e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-277907                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         10s
	  kube-system                 kindnet-b6wxk                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5s
	  kube-system                 kube-apiserver-newest-cni-277907             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 kube-controller-manager-newest-cni-277907    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 kube-proxy-2ss46                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5s
	  kube-system                 kube-scheduler-newest-cni-277907             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 3s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  22s (x8 over 22s)  kubelet          Node newest-cni-277907 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    22s (x8 over 22s)  kubelet          Node newest-cni-277907 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     22s (x8 over 22s)  kubelet          Node newest-cni-277907 status is now: NodeHasSufficientPID
	  Normal   Starting                 10s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 10s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  10s                kubelet          Node newest-cni-277907 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10s                kubelet          Node newest-cni-277907 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10s                kubelet          Node newest-cni-277907 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6s                 node-controller  Node newest-cni-277907 event: Registered Node newest-cni-277907 in Controller
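	The Ready=False condition and the node.kubernetes.io/not-ready taint above are why the coredns and storage-provisioner pods were still reported as Pending at 19:41:47: no CNI configuration file existed yet, and the kindnet-b6wxk pod that writes it (via the cni-cfg host-path volume) had only just started. A hedged example of how one could watch this clear from the host (standard kubectl, not part of this run):
	
	    kubectl get node newest-cni-277907 --watch
	    kubectl -n kube-system get pods -l k8s-app=kube-dns --watch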
	
	
	==> dmesg <==
	[Oct 3 19:12] overlayfs: idmapped layers are currently not supported
	[ +24.839009] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:13] overlayfs: idmapped layers are currently not supported
	[ +26.493253] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:15] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:16] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:17] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000010] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Oct 3 19:18] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:20] overlayfs: idmapped layers are currently not supported
	[ +32.018892] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:22] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:24] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:26] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:32] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:34] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:35] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:36] overlayfs: idmapped layers are currently not supported
	[  +4.740983] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:38] overlayfs: idmapped layers are currently not supported
	[ +12.897300] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:39] overlayfs: idmapped layers are currently not supported
	[  +4.104516] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:41] overlayfs: idmapped layers are currently not supported
	[  +1.990678] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [cbfccaee14ce5db82cfe117a37c96ebaaad074d83aa11e047a1b89ae20fab70d] <==
	{"level":"warn","ts":"2025-10-03T19:41:33.933173Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:41:34.000886Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:41:34.051695Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:41:34.106088Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:41:34.153626Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:41:34.203859Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:41:34.251317Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:41:34.307806Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:41:34.357812Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:41:34.424468Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:41:34.518623Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:41:34.597388Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:41:34.685327Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:41:34.756452Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:41:34.798851Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:41:34.832647Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:41:34.911129Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:41:34.953609Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:41:35.038747Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:41:35.238110Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:41:35.286337Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:41:35.305446Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:41:35.374195Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:41:35.457653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:41:35.711692Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41680","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 19:41:50 up  2:24,  0 user,  load average: 6.27, 3.90, 2.67
	Linux newest-cni-277907 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [06803893ae40647902a607a53cdade20a6e5b96035684f498e25a905748295a5] <==
	I1003 19:41:46.634798       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1003 19:41:46.642552       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1003 19:41:46.642695       1 main.go:148] setting mtu 1500 for CNI 
	I1003 19:41:46.642709       1 main.go:178] kindnetd IP family: "ipv4"
	I1003 19:41:46.642720       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-03T19:41:46Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1003 19:41:46.839462       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1003 19:41:46.839484       1 controller.go:381] "Waiting for informer caches to sync"
	I1003 19:41:46.839492       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1003 19:41:46.839849       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [cd97772b1864ef3808947a0d294c6d30782c612ca94e91ab9cb10a393f451966] <==
	I1003 19:41:37.874403       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1003 19:41:37.874430       1 cache.go:39] Caches are synced for autoregister controller
	I1003 19:41:37.874606       1 controller.go:667] quota admission added evaluator for: namespaces
	I1003 19:41:37.917192       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1003 19:41:38.019885       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1003 19:41:38.025247       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1003 19:41:38.077257       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1003 19:41:38.077746       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1003 19:41:38.451540       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1003 19:41:38.461011       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1003 19:41:38.461105       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1003 19:41:39.350160       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1003 19:41:39.408891       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1003 19:41:39.572834       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1003 19:41:39.584778       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1003 19:41:39.586138       1 controller.go:667] quota admission added evaluator for: endpoints
	I1003 19:41:39.598276       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1003 19:41:39.652141       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1003 19:41:40.362913       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1003 19:41:40.393548       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1003 19:41:40.432205       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1003 19:41:45.429798       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1003 19:41:45.439584       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1003 19:41:45.701098       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1003 19:41:45.755376       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [4457f483e5e1c1cd7f0098816c22fa5458cc1aeb45e5e3c82487670283b5190d] <==
	I1003 19:41:44.783013       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1003 19:41:44.783497       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1003 19:41:44.783595       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1003 19:41:44.744319       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1003 19:41:44.777596       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1003 19:41:44.788803       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1003 19:41:44.799436       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1003 19:41:44.707016       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1003 19:41:44.744709       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1003 19:41:44.747005       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1003 19:41:44.747694       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1003 19:41:44.779313       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="newest-cni-277907" podCIDRs=["10.42.0.0/24"]
	I1003 19:41:44.787026       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1003 19:41:44.806824       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1003 19:41:44.808344       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1003 19:41:44.808934       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1003 19:41:44.808999       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1003 19:41:44.809038       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1003 19:41:44.809895       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1003 19:41:44.823051       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1003 19:41:44.843112       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1003 19:41:44.843368       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1003 19:41:44.843541       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-277907"
	I1003 19:41:44.843665       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1003 19:41:44.846719       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [905d2c3a1bb767ab4d056acbd3016a732685d2b0513e9072034cc37c7ed0de19] <==
	I1003 19:41:46.533591       1 server_linux.go:53] "Using iptables proxy"
	I1003 19:41:46.770682       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1003 19:41:46.871606       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1003 19:41:46.871647       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1003 19:41:46.871751       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1003 19:41:46.927400       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1003 19:41:46.927453       1 server_linux.go:132] "Using iptables Proxier"
	I1003 19:41:47.007925       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1003 19:41:47.008291       1 server.go:527] "Version info" version="v1.34.1"
	I1003 19:41:47.008316       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1003 19:41:47.022849       1 config.go:200] "Starting service config controller"
	I1003 19:41:47.022876       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1003 19:41:47.022909       1 config.go:106] "Starting endpoint slice config controller"
	I1003 19:41:47.022914       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1003 19:41:47.022925       1 config.go:403] "Starting serviceCIDR config controller"
	I1003 19:41:47.022929       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1003 19:41:47.024702       1 config.go:309] "Starting node config controller"
	I1003 19:41:47.024716       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1003 19:41:47.024810       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1003 19:41:47.132288       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1003 19:41:47.132323       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1003 19:41:47.124145       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [6352e06e25b8eab99e2c14131c798382f07742492548b2f7f17466d0f5bd2cdb] <==
	I1003 19:41:37.952910       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1003 19:41:37.973862       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1003 19:41:37.980156       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1003 19:41:38.002155       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1003 19:41:38.002358       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1003 19:41:38.002495       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1003 19:41:38.002673       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1003 19:41:38.002787       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1003 19:41:38.002898       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1003 19:41:38.003019       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1003 19:41:38.003126       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1003 19:41:38.003236       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1003 19:41:38.003363       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1003 19:41:38.003486       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1003 19:41:38.003636       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1003 19:41:38.003807       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1003 19:41:38.004006       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1003 19:41:38.004190       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1003 19:41:38.004294       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1003 19:41:38.004362       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1003 19:41:38.834066       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1003 19:41:38.864607       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1003 19:41:38.940628       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1003 19:41:39.059848       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1003 19:41:41.152319       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 03 19:41:40 newest-cni-277907 kubelet[1312]: I1003 19:41:40.692782    1312 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-277907"
	Oct 03 19:41:41 newest-cni-277907 kubelet[1312]: I1003 19:41:41.374709    1312 apiserver.go:52] "Watching apiserver"
	Oct 03 19:41:41 newest-cni-277907 kubelet[1312]: I1003 19:41:41.427673    1312 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 03 19:41:41 newest-cni-277907 kubelet[1312]: I1003 19:41:41.530812    1312 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-277907"
	Oct 03 19:41:41 newest-cni-277907 kubelet[1312]: I1003 19:41:41.532121    1312 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-277907"
	Oct 03 19:41:41 newest-cni-277907 kubelet[1312]: E1003 19:41:41.611337    1312 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-277907\" already exists" pod="kube-system/kube-scheduler-newest-cni-277907"
	Oct 03 19:41:41 newest-cni-277907 kubelet[1312]: E1003 19:41:41.613633    1312 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-277907\" already exists" pod="kube-system/etcd-newest-cni-277907"
	Oct 03 19:41:41 newest-cni-277907 kubelet[1312]: I1003 19:41:41.712193    1312 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-277907" podStartSLOduration=1.712039736 podStartE2EDuration="1.712039736s" podCreationTimestamp="2025-10-03 19:41:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-03 19:41:41.711651118 +0000 UTC m=+1.477774288" watchObservedRunningTime="2025-10-03 19:41:41.712039736 +0000 UTC m=+1.478162905"
	Oct 03 19:41:41 newest-cni-277907 kubelet[1312]: I1003 19:41:41.782375    1312 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-277907" podStartSLOduration=1.782259144 podStartE2EDuration="1.782259144s" podCreationTimestamp="2025-10-03 19:41:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-03 19:41:41.758407351 +0000 UTC m=+1.524530529" watchObservedRunningTime="2025-10-03 19:41:41.782259144 +0000 UTC m=+1.548382322"
	Oct 03 19:41:41 newest-cni-277907 kubelet[1312]: I1003 19:41:41.893732    1312 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-277907" podStartSLOduration=1.8937132559999998 podStartE2EDuration="1.893713256s" podCreationTimestamp="2025-10-03 19:41:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-03 19:41:41.842962729 +0000 UTC m=+1.609085907" watchObservedRunningTime="2025-10-03 19:41:41.893713256 +0000 UTC m=+1.659836434"
	Oct 03 19:41:41 newest-cni-277907 kubelet[1312]: I1003 19:41:41.964090    1312 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-277907" podStartSLOduration=1.964069996 podStartE2EDuration="1.964069996s" podCreationTimestamp="2025-10-03 19:41:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-03 19:41:41.894034065 +0000 UTC m=+1.660157243" watchObservedRunningTime="2025-10-03 19:41:41.964069996 +0000 UTC m=+1.730193215"
	Oct 03 19:41:44 newest-cni-277907 kubelet[1312]: I1003 19:41:44.775844    1312 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 03 19:41:44 newest-cni-277907 kubelet[1312]: I1003 19:41:44.776406    1312 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 03 19:41:45 newest-cni-277907 kubelet[1312]: I1003 19:41:45.881644    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3e843f2f-9e62-4da8-a413-b23a4e8c33ef-kube-proxy\") pod \"kube-proxy-2ss46\" (UID: \"3e843f2f-9e62-4da8-a413-b23a4e8c33ef\") " pod="kube-system/kube-proxy-2ss46"
	Oct 03 19:41:45 newest-cni-277907 kubelet[1312]: I1003 19:41:45.882219    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e843f2f-9e62-4da8-a413-b23a4e8c33ef-lib-modules\") pod \"kube-proxy-2ss46\" (UID: \"3e843f2f-9e62-4da8-a413-b23a4e8c33ef\") " pod="kube-system/kube-proxy-2ss46"
	Oct 03 19:41:45 newest-cni-277907 kubelet[1312]: I1003 19:41:45.882367    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3e843f2f-9e62-4da8-a413-b23a4e8c33ef-xtables-lock\") pod \"kube-proxy-2ss46\" (UID: \"3e843f2f-9e62-4da8-a413-b23a4e8c33ef\") " pod="kube-system/kube-proxy-2ss46"
	Oct 03 19:41:45 newest-cni-277907 kubelet[1312]: I1003 19:41:45.882501    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xz4kb\" (UniqueName: \"kubernetes.io/projected/3e843f2f-9e62-4da8-a413-b23a4e8c33ef-kube-api-access-xz4kb\") pod \"kube-proxy-2ss46\" (UID: \"3e843f2f-9e62-4da8-a413-b23a4e8c33ef\") " pod="kube-system/kube-proxy-2ss46"
	Oct 03 19:41:45 newest-cni-277907 kubelet[1312]: I1003 19:41:45.882639    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/efbd6505-dbd9-4229-9f30-5de99ce9258e-cni-cfg\") pod \"kindnet-b6wxk\" (UID: \"efbd6505-dbd9-4229-9f30-5de99ce9258e\") " pod="kube-system/kindnet-b6wxk"
	Oct 03 19:41:45 newest-cni-277907 kubelet[1312]: I1003 19:41:45.882778    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/efbd6505-dbd9-4229-9f30-5de99ce9258e-xtables-lock\") pod \"kindnet-b6wxk\" (UID: \"efbd6505-dbd9-4229-9f30-5de99ce9258e\") " pod="kube-system/kindnet-b6wxk"
	Oct 03 19:41:45 newest-cni-277907 kubelet[1312]: I1003 19:41:45.882896    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/efbd6505-dbd9-4229-9f30-5de99ce9258e-lib-modules\") pod \"kindnet-b6wxk\" (UID: \"efbd6505-dbd9-4229-9f30-5de99ce9258e\") " pod="kube-system/kindnet-b6wxk"
	Oct 03 19:41:45 newest-cni-277907 kubelet[1312]: I1003 19:41:45.883010    1312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrxht\" (UniqueName: \"kubernetes.io/projected/efbd6505-dbd9-4229-9f30-5de99ce9258e-kube-api-access-nrxht\") pod \"kindnet-b6wxk\" (UID: \"efbd6505-dbd9-4229-9f30-5de99ce9258e\") " pod="kube-system/kindnet-b6wxk"
	Oct 03 19:41:46 newest-cni-277907 kubelet[1312]: I1003 19:41:46.022073    1312 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 03 19:41:46 newest-cni-277907 kubelet[1312]: W1003 19:41:46.163608    1312 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/8b59090431046f9d951b48ace59a9091019f835007d577cd4555f6908daa6561/crio-d0b7c76486754a977b080f6cbe2fc6aa0a56b0b5f05ffdb1275723760708f29e WatchSource:0}: Error finding container d0b7c76486754a977b080f6cbe2fc6aa0a56b0b5f05ffdb1275723760708f29e: Status 404 returned error can't find the container with id d0b7c76486754a977b080f6cbe2fc6aa0a56b0b5f05ffdb1275723760708f29e
	Oct 03 19:41:46 newest-cni-277907 kubelet[1312]: I1003 19:41:46.624389    1312 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2ss46" podStartSLOduration=1.62437055 podStartE2EDuration="1.62437055s" podCreationTimestamp="2025-10-03 19:41:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-03 19:41:46.624228599 +0000 UTC m=+6.390351777" watchObservedRunningTime="2025-10-03 19:41:46.62437055 +0000 UTC m=+6.390493720"
	Oct 03 19:41:47 newest-cni-277907 kubelet[1312]: I1003 19:41:47.598742    1312 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-b6wxk" podStartSLOduration=2.598725735 podStartE2EDuration="2.598725735s" podCreationTimestamp="2025-10-03 19:41:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-03 19:41:47.598432274 +0000 UTC m=+7.364555452" watchObservedRunningTime="2025-10-03 19:41:47.598725735 +0000 UTC m=+7.364848905"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-277907 -n newest-cni-277907
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-277907 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-qvbbr storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-277907 describe pod coredns-66bc5c9577-qvbbr storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-277907 describe pod coredns-66bc5c9577-qvbbr storage-provisioner: exit status 1 (111.792138ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-qvbbr" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-277907 describe pod coredns-66bc5c9577-qvbbr storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (3.17s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (6.43s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-277907 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p newest-cni-277907 --alsologtostderr -v=1: exit status 80 (2.096966612s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-277907 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 19:42:09.187296  498119 out.go:360] Setting OutFile to fd 1 ...
	I1003 19:42:09.187503  498119 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 19:42:09.187543  498119 out.go:374] Setting ErrFile to fd 2...
	I1003 19:42:09.187563  498119 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 19:42:09.187823  498119 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-284583/.minikube/bin
	I1003 19:42:09.188147  498119 out.go:368] Setting JSON to false
	I1003 19:42:09.188204  498119 mustload.go:65] Loading cluster: newest-cni-277907
	I1003 19:42:09.188616  498119 config.go:182] Loaded profile config "newest-cni-277907": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 19:42:09.189198  498119 cli_runner.go:164] Run: docker container inspect newest-cni-277907 --format={{.State.Status}}
	I1003 19:42:09.209940  498119 host.go:66] Checking if "newest-cni-277907" exists ...
	I1003 19:42:09.210274  498119 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 19:42:09.288523  498119 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-03 19:42:09.27885614 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1003 19:42:09.289227  498119 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-277907 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1003 19:42:09.295118  498119 out.go:179] * Pausing node newest-cni-277907 ... 
	I1003 19:42:09.297997  498119 host.go:66] Checking if "newest-cni-277907" exists ...
	I1003 19:42:09.298345  498119 ssh_runner.go:195] Run: systemctl --version
	I1003 19:42:09.298402  498119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-277907
	I1003 19:42:09.320522  498119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/newest-cni-277907/id_rsa Username:docker}
	I1003 19:42:09.432264  498119 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 19:42:09.453079  498119 pause.go:51] kubelet running: true
	I1003 19:42:09.453154  498119 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1003 19:42:09.700348  498119 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1003 19:42:09.700426  498119 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1003 19:42:09.814759  498119 cri.go:89] found id: "e9387afb5b6ce8d012cecb62f497dd44a46bfcfa85872e279424e14948ca19e3"
	I1003 19:42:09.814831  498119 cri.go:89] found id: "0bb2708b8a68c6bf83e7c6ebde209424b7a34780db11c23f8c8ee479b9536089"
	I1003 19:42:09.814850  498119 cri.go:89] found id: "e013c184e6e3ac3b12ebb1e788f88a522df87d865c2cded32ce1ba2140687d59"
	I1003 19:42:09.814869  498119 cri.go:89] found id: "ef5fea601208f50b53f6eef5d5284a014ca62a5cdc7ba7676e680d130cb543cb"
	I1003 19:42:09.814887  498119 cri.go:89] found id: "d54346ccf42f503b43643a2a4f2797f3f6219e7ebb4f15de4620be40f934e579"
	I1003 19:42:09.814923  498119 cri.go:89] found id: "19786ebd68db6b6c5bd023f0384178b772b9a909a9ca5278f768374892e103d8"
	I1003 19:42:09.814946  498119 cri.go:89] found id: ""
	I1003 19:42:09.815070  498119 ssh_runner.go:195] Run: sudo runc list -f json
	I1003 19:42:09.830687  498119 retry.go:31] will retry after 357.317348ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-03T19:42:09Z" level=error msg="open /run/runc: no such file or directory"
	I1003 19:42:10.189157  498119 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 19:42:10.205806  498119 pause.go:51] kubelet running: false
	I1003 19:42:10.205921  498119 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1003 19:42:10.418202  498119 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1003 19:42:10.418288  498119 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1003 19:42:10.501614  498119 cri.go:89] found id: "e9387afb5b6ce8d012cecb62f497dd44a46bfcfa85872e279424e14948ca19e3"
	I1003 19:42:10.501642  498119 cri.go:89] found id: "0bb2708b8a68c6bf83e7c6ebde209424b7a34780db11c23f8c8ee479b9536089"
	I1003 19:42:10.501648  498119 cri.go:89] found id: "e013c184e6e3ac3b12ebb1e788f88a522df87d865c2cded32ce1ba2140687d59"
	I1003 19:42:10.501652  498119 cri.go:89] found id: "ef5fea601208f50b53f6eef5d5284a014ca62a5cdc7ba7676e680d130cb543cb"
	I1003 19:42:10.501655  498119 cri.go:89] found id: "d54346ccf42f503b43643a2a4f2797f3f6219e7ebb4f15de4620be40f934e579"
	I1003 19:42:10.501661  498119 cri.go:89] found id: "19786ebd68db6b6c5bd023f0384178b772b9a909a9ca5278f768374892e103d8"
	I1003 19:42:10.501664  498119 cri.go:89] found id: ""
	I1003 19:42:10.501714  498119 ssh_runner.go:195] Run: sudo runc list -f json
	I1003 19:42:10.515588  498119 retry.go:31] will retry after 421.341799ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-03T19:42:10Z" level=error msg="open /run/runc: no such file or directory"
	I1003 19:42:10.937234  498119 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 19:42:10.950849  498119 pause.go:51] kubelet running: false
	I1003 19:42:10.950946  498119 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1003 19:42:11.106091  498119 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1003 19:42:11.106254  498119 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1003 19:42:11.179734  498119 cri.go:89] found id: "e9387afb5b6ce8d012cecb62f497dd44a46bfcfa85872e279424e14948ca19e3"
	I1003 19:42:11.179760  498119 cri.go:89] found id: "0bb2708b8a68c6bf83e7c6ebde209424b7a34780db11c23f8c8ee479b9536089"
	I1003 19:42:11.179766  498119 cri.go:89] found id: "e013c184e6e3ac3b12ebb1e788f88a522df87d865c2cded32ce1ba2140687d59"
	I1003 19:42:11.179769  498119 cri.go:89] found id: "ef5fea601208f50b53f6eef5d5284a014ca62a5cdc7ba7676e680d130cb543cb"
	I1003 19:42:11.179773  498119 cri.go:89] found id: "d54346ccf42f503b43643a2a4f2797f3f6219e7ebb4f15de4620be40f934e579"
	I1003 19:42:11.179776  498119 cri.go:89] found id: "19786ebd68db6b6c5bd023f0384178b772b9a909a9ca5278f768374892e103d8"
	I1003 19:42:11.179779  498119 cri.go:89] found id: ""
	I1003 19:42:11.179847  498119 ssh_runner.go:195] Run: sudo runc list -f json
	I1003 19:42:11.196103  498119 out.go:203] 
	W1003 19:42:11.199094  498119 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-03T19:42:11Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-03T19:42:11Z" level=error msg="open /run/runc: no such file or directory"
	
	W1003 19:42:11.199120  498119 out.go:285] * 
	* 
	W1003 19:42:11.206334  498119 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 19:42:11.209209  498119 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p newest-cni-277907 --alsologtostderr -v=1 failed: exit status 80
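The pause failure above stems from `sudo runc list -f json` exiting 1 with "open /run/runc: no such file or directory" while CRI-O still reports running containers. A hypothetical manual re-check on the node, using only commands that already appear in the trace (illustrative sketch, not part of the test), would be:

	out/minikube-linux-arm64 ssh -p newest-cni-277907
	sudo runc list -f json      # reproduces the "open /run/runc: no such file or directory" error seen above
	ls -ld /run/runc            # confirm whether runc's state directory exists at all
	sudo crictl ps -a           # CRI-O still lists the kube-system containers enumerated in the trace
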
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-277907
helpers_test.go:243: (dbg) docker inspect newest-cni-277907:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8b59090431046f9d951b48ace59a9091019f835007d577cd4555f6908daa6561",
	        "Created": "2025-10-03T19:41:08.107758945Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 496473,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-03T19:41:54.432915553Z",
	            "FinishedAt": "2025-10-03T19:41:53.555563502Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/8b59090431046f9d951b48ace59a9091019f835007d577cd4555f6908daa6561/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8b59090431046f9d951b48ace59a9091019f835007d577cd4555f6908daa6561/hostname",
	        "HostsPath": "/var/lib/docker/containers/8b59090431046f9d951b48ace59a9091019f835007d577cd4555f6908daa6561/hosts",
	        "LogPath": "/var/lib/docker/containers/8b59090431046f9d951b48ace59a9091019f835007d577cd4555f6908daa6561/8b59090431046f9d951b48ace59a9091019f835007d577cd4555f6908daa6561-json.log",
	        "Name": "/newest-cni-277907",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-277907:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-277907",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8b59090431046f9d951b48ace59a9091019f835007d577cd4555f6908daa6561",
	                "LowerDir": "/var/lib/docker/overlay2/b6fb5b9dd131113b1ef3ef7a8465607ff85135a48ebecb8c77db75dd388bdc0a-init/diff:/var/lib/docker/overlay2/87b205803817b0b71a214d995ab7e10a92033bbf72d76d6e052f1d21ccecb313/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b6fb5b9dd131113b1ef3ef7a8465607ff85135a48ebecb8c77db75dd388bdc0a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b6fb5b9dd131113b1ef3ef7a8465607ff85135a48ebecb8c77db75dd388bdc0a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b6fb5b9dd131113b1ef3ef7a8465607ff85135a48ebecb8c77db75dd388bdc0a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-277907",
	                "Source": "/var/lib/docker/volumes/newest-cni-277907/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-277907",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-277907",
	                "name.minikube.sigs.k8s.io": "newest-cni-277907",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d003088b4efd9ccaec258d708e03119b245ed08d127fc940136f314794fee932",
	            "SandboxKey": "/var/run/docker/netns/d003088b4efd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33463"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33464"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33467"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33465"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33466"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-277907": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1a:b9:d5:55:df:6b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2fb89ec9f4b0d949e37566d491ff7c9e7ec5488e3271757158a55861f4d56349",
	                    "EndpointID": "ba9d77969dba2ed6ca106fed70744bafda1de7e24a2dcf561a6a7c9ec93aed7c",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-277907",
	                        "8b5909043104"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
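The full `docker inspect` dump above is what the post-mortem helper captures; the individual fields minikube itself polls during pause can be read directly with `--format` templates, as in the commands already present in the trace (illustrative only):

	docker container inspect newest-cni-277907 --format={{.State.Status}}
	docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-277907
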
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-277907 -n newest-cni-277907
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-277907 -n newest-cni-277907: exit status 2 (371.92877ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-277907 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-277907 logs -n 25: (1.259478339s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ image   │ no-preload-643397 image list --format=json                                                                                                                                                                                                    │ no-preload-643397            │ jenkins │ v1.37.0 │ 03 Oct 25 19:39 UTC │ 03 Oct 25 19:39 UTC │
	│ pause   │ -p no-preload-643397 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-643397            │ jenkins │ v1.37.0 │ 03 Oct 25 19:39 UTC │                     │
	│ delete  │ -p no-preload-643397                                                                                                                                                                                                                          │ no-preload-643397            │ jenkins │ v1.37.0 │ 03 Oct 25 19:39 UTC │ 03 Oct 25 19:39 UTC │
	│ delete  │ -p no-preload-643397                                                                                                                                                                                                                          │ no-preload-643397            │ jenkins │ v1.37.0 │ 03 Oct 25 19:39 UTC │ 03 Oct 25 19:39 UTC │
	│ delete  │ -p disable-driver-mounts-839513                                                                                                                                                                                                               │ disable-driver-mounts-839513 │ jenkins │ v1.37.0 │ 03 Oct 25 19:39 UTC │ 03 Oct 25 19:39 UTC │
	│ start   │ -p default-k8s-diff-port-842797 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-842797 │ jenkins │ v1.37.0 │ 03 Oct 25 19:39 UTC │ 03 Oct 25 19:40 UTC │
	│ addons  │ enable metrics-server -p embed-certs-327416 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-327416           │ jenkins │ v1.37.0 │ 03 Oct 25 19:39 UTC │                     │
	│ stop    │ -p embed-certs-327416 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-327416           │ jenkins │ v1.37.0 │ 03 Oct 25 19:39 UTC │ 03 Oct 25 19:39 UTC │
	│ addons  │ enable dashboard -p embed-certs-327416 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-327416           │ jenkins │ v1.37.0 │ 03 Oct 25 19:39 UTC │ 03 Oct 25 19:39 UTC │
	│ start   │ -p embed-certs-327416 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-327416           │ jenkins │ v1.37.0 │ 03 Oct 25 19:39 UTC │ 03 Oct 25 19:40 UTC │
	│ image   │ embed-certs-327416 image list --format=json                                                                                                                                                                                                   │ embed-certs-327416           │ jenkins │ v1.37.0 │ 03 Oct 25 19:40 UTC │ 03 Oct 25 19:40 UTC │
	│ pause   │ -p embed-certs-327416 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-327416           │ jenkins │ v1.37.0 │ 03 Oct 25 19:40 UTC │                     │
	│ delete  │ -p embed-certs-327416                                                                                                                                                                                                                         │ embed-certs-327416           │ jenkins │ v1.37.0 │ 03 Oct 25 19:40 UTC │ 03 Oct 25 19:41 UTC │
	│ delete  │ -p embed-certs-327416                                                                                                                                                                                                                         │ embed-certs-327416           │ jenkins │ v1.37.0 │ 03 Oct 25 19:41 UTC │ 03 Oct 25 19:41 UTC │
	│ start   │ -p newest-cni-277907 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-277907            │ jenkins │ v1.37.0 │ 03 Oct 25 19:41 UTC │ 03 Oct 25 19:41 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-842797 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-842797 │ jenkins │ v1.37.0 │ 03 Oct 25 19:41 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-842797 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-842797 │ jenkins │ v1.37.0 │ 03 Oct 25 19:41 UTC │ 03 Oct 25 19:41 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-842797 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-842797 │ jenkins │ v1.37.0 │ 03 Oct 25 19:41 UTC │ 03 Oct 25 19:41 UTC │
	│ start   │ -p default-k8s-diff-port-842797 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-842797 │ jenkins │ v1.37.0 │ 03 Oct 25 19:41 UTC │ 03 Oct 25 19:42 UTC │
	│ addons  │ enable metrics-server -p newest-cni-277907 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-277907            │ jenkins │ v1.37.0 │ 03 Oct 25 19:41 UTC │                     │
	│ stop    │ -p newest-cni-277907 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-277907            │ jenkins │ v1.37.0 │ 03 Oct 25 19:41 UTC │ 03 Oct 25 19:41 UTC │
	│ addons  │ enable dashboard -p newest-cni-277907 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-277907            │ jenkins │ v1.37.0 │ 03 Oct 25 19:41 UTC │ 03 Oct 25 19:41 UTC │
	│ start   │ -p newest-cni-277907 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-277907            │ jenkins │ v1.37.0 │ 03 Oct 25 19:41 UTC │ 03 Oct 25 19:42 UTC │
	│ image   │ newest-cni-277907 image list --format=json                                                                                                                                                                                                    │ newest-cni-277907            │ jenkins │ v1.37.0 │ 03 Oct 25 19:42 UTC │ 03 Oct 25 19:42 UTC │
	│ pause   │ -p newest-cni-277907 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-277907            │ jenkins │ v1.37.0 │ 03 Oct 25 19:42 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/03 19:41:54
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 19:41:54.120965  496330 out.go:360] Setting OutFile to fd 1 ...
	I1003 19:41:54.121078  496330 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 19:41:54.121089  496330 out.go:374] Setting ErrFile to fd 2...
	I1003 19:41:54.121094  496330 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 19:41:54.121354  496330 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-284583/.minikube/bin
	I1003 19:41:54.121779  496330 out.go:368] Setting JSON to false
	I1003 19:41:54.122774  496330 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8666,"bootTime":1759511849,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1003 19:41:54.122843  496330 start.go:140] virtualization:  
	I1003 19:41:54.127960  496330 out.go:179] * [newest-cni-277907] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1003 19:41:54.131185  496330 out.go:179]   - MINIKUBE_LOCATION=21625
	I1003 19:41:54.131233  496330 notify.go:220] Checking for updates...
	I1003 19:41:54.137573  496330 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 19:41:54.140460  496330 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21625-284583/kubeconfig
	I1003 19:41:54.143316  496330 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-284583/.minikube
	I1003 19:41:54.146169  496330 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1003 19:41:54.149090  496330 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 19:41:54.152535  496330 config.go:182] Loaded profile config "newest-cni-277907": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 19:41:54.153202  496330 driver.go:421] Setting default libvirt URI to qemu:///system
	I1003 19:41:54.188751  496330 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1003 19:41:54.188872  496330 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 19:41:54.255033  496330 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-03 19:41:54.245853078 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1003 19:41:54.255140  496330 docker.go:318] overlay module found
	I1003 19:41:54.258263  496330 out.go:179] * Using the docker driver based on existing profile
	I1003 19:41:54.261145  496330 start.go:304] selected driver: docker
	I1003 19:41:54.261164  496330 start.go:924] validating driver "docker" against &{Name:newest-cni-277907 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-277907 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 19:41:54.261259  496330 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 19:41:54.262011  496330 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 19:41:54.317634  496330 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-03 19:41:54.308339117 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1003 19:41:54.317983  496330 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1003 19:41:54.318017  496330 cni.go:84] Creating CNI manager for ""
	I1003 19:41:54.318080  496330 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 19:41:54.318211  496330 start.go:348] cluster config:
	{Name:newest-cni-277907 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-277907 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 19:41:54.321284  496330 out.go:179] * Starting "newest-cni-277907" primary control-plane node in "newest-cni-277907" cluster
	I1003 19:41:54.324172  496330 cache.go:123] Beginning downloading kic base image for docker with crio
	I1003 19:41:54.326884  496330 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1003 19:41:54.329849  496330 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 19:41:54.329905  496330 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21625-284583/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1003 19:41:54.329918  496330 cache.go:58] Caching tarball of preloaded images
	I1003 19:41:54.329931  496330 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1003 19:41:54.329998  496330 preload.go:233] Found /home/jenkins/minikube-integration/21625-284583/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1003 19:41:54.330008  496330 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1003 19:41:54.330125  496330 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/newest-cni-277907/config.json ...
	I1003 19:41:54.364203  496330 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1003 19:41:54.364223  496330 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1003 19:41:54.364236  496330 cache.go:232] Successfully downloaded all kic artifacts
	I1003 19:41:54.364261  496330 start.go:360] acquireMachinesLock for newest-cni-277907: {Name:mkd134b602e6b475d420a69856bbf9b26bf807b4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 19:41:54.364311  496330 start.go:364] duration metric: took 33.354µs to acquireMachinesLock for "newest-cni-277907"
	I1003 19:41:54.364329  496330 start.go:96] Skipping create...Using existing machine configuration
	I1003 19:41:54.364335  496330 fix.go:54] fixHost starting: 
	I1003 19:41:54.364600  496330 cli_runner.go:164] Run: docker container inspect newest-cni-277907 --format={{.State.Status}}
	I1003 19:41:54.398823  496330 fix.go:112] recreateIfNeeded on newest-cni-277907: state=Stopped err=<nil>
	W1003 19:41:54.398859  496330 fix.go:138] unexpected machine state, will restart: <nil>
	W1003 19:41:53.665999  492927 pod_ready.go:104] pod "coredns-66bc5c9577-l8knz" is not "Ready", error: <nil>
	W1003 19:41:56.165748  492927 pod_ready.go:104] pod "coredns-66bc5c9577-l8knz" is not "Ready", error: <nil>
	W1003 19:41:58.165851  492927 pod_ready.go:104] pod "coredns-66bc5c9577-l8knz" is not "Ready", error: <nil>
	I1003 19:41:54.401832  496330 out.go:252] * Restarting existing docker container for "newest-cni-277907" ...
	I1003 19:41:54.401926  496330 cli_runner.go:164] Run: docker start newest-cni-277907
	I1003 19:41:54.665793  496330 cli_runner.go:164] Run: docker container inspect newest-cni-277907 --format={{.State.Status}}
	I1003 19:41:54.686555  496330 kic.go:430] container "newest-cni-277907" state is running.
	I1003 19:41:54.686942  496330 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-277907
	I1003 19:41:54.706657  496330 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/newest-cni-277907/config.json ...
	I1003 19:41:54.706894  496330 machine.go:93] provisionDockerMachine start ...
	I1003 19:41:54.706959  496330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-277907
	I1003 19:41:54.728116  496330 main.go:141] libmachine: Using SSH client type: native
	I1003 19:41:54.728468  496330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1003 19:41:54.728481  496330 main.go:141] libmachine: About to run SSH command:
	hostname
	I1003 19:41:54.731067  496330 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43580->127.0.0.1:33463: read: connection reset by peer
	I1003 19:41:57.868675  496330 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-277907
	
	I1003 19:41:57.868707  496330 ubuntu.go:182] provisioning hostname "newest-cni-277907"
	I1003 19:41:57.868848  496330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-277907
	I1003 19:41:57.889499  496330 main.go:141] libmachine: Using SSH client type: native
	I1003 19:41:57.889827  496330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1003 19:41:57.889853  496330 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-277907 && echo "newest-cni-277907" | sudo tee /etc/hostname
	I1003 19:41:58.032204  496330 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-277907
	
	I1003 19:41:58.032281  496330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-277907
	I1003 19:41:58.051112  496330 main.go:141] libmachine: Using SSH client type: native
	I1003 19:41:58.051408  496330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1003 19:41:58.051425  496330 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-277907' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-277907/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-277907' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 19:41:58.184815  496330 main.go:141] libmachine: SSH cmd err, output: <nil>: 
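The three SSH commands above set the container hostname and keep /etc/hosts in sync with it. As an illustration only (not part of the test run), the result could be confirmed from the host like this:

docker exec newest-cni-277907 hostname                        # expected: newest-cni-277907
docker exec newest-cni-277907 grep -n '127.0.1.1' /etc/hosts  # expected: 127.0.1.1 newest-cni-277907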
	I1003 19:41:58.184839  496330 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21625-284583/.minikube CaCertPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21625-284583/.minikube}
	I1003 19:41:58.184869  496330 ubuntu.go:190] setting up certificates
	I1003 19:41:58.184886  496330 provision.go:84] configureAuth start
	I1003 19:41:58.184956  496330 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-277907
	I1003 19:41:58.203787  496330 provision.go:143] copyHostCerts
	I1003 19:41:58.203866  496330 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-284583/.minikube/ca.pem, removing ...
	I1003 19:41:58.203890  496330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-284583/.minikube/ca.pem
	I1003 19:41:58.203975  496330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21625-284583/.minikube/ca.pem (1082 bytes)
	I1003 19:41:58.204074  496330 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-284583/.minikube/cert.pem, removing ...
	I1003 19:41:58.204085  496330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-284583/.minikube/cert.pem
	I1003 19:41:58.204114  496330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21625-284583/.minikube/cert.pem (1123 bytes)
	I1003 19:41:58.204172  496330 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-284583/.minikube/key.pem, removing ...
	I1003 19:41:58.204187  496330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-284583/.minikube/key.pem
	I1003 19:41:58.204211  496330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21625-284583/.minikube/key.pem (1675 bytes)
	I1003 19:41:58.204274  496330 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca-key.pem org=jenkins.newest-cni-277907 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-277907]
	I1003 19:41:58.423390  496330 provision.go:177] copyRemoteCerts
	I1003 19:41:58.423459  496330 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 19:41:58.423510  496330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-277907
	I1003 19:41:58.441519  496330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/newest-cni-277907/id_rsa Username:docker}
	I1003 19:41:58.537115  496330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 19:41:58.556082  496330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1003 19:41:58.574450  496330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1003 19:41:58.596238  496330 provision.go:87] duration metric: took 411.324935ms to configureAuth
	I1003 19:41:58.596308  496330 ubuntu.go:206] setting minikube options for container-runtime
	I1003 19:41:58.596516  496330 config.go:182] Loaded profile config "newest-cni-277907": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 19:41:58.596643  496330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-277907
	I1003 19:41:58.615214  496330 main.go:141] libmachine: Using SSH client type: native
	I1003 19:41:58.615545  496330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1003 19:41:58.615565  496330 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1003 19:41:58.896907  496330 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1003 19:41:58.896932  496330 machine.go:96] duration metric: took 4.190024679s to provisionDockerMachine
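Assuming the kicbase image wires /etc/sysconfig/crio.minikube into the crio systemd unit as an environment file (an assumption, not shown in this log), the effect of the tee-and-restart command above can be spot-checked with something like:

# Illustration (assumed unit wiring): confirm the drop-in and that crio restarted cleanly
docker exec newest-cni-277907 cat /etc/sysconfig/crio.minikube
docker exec newest-cni-277907 systemctl is-active crio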
	I1003 19:41:58.896943  496330 start.go:293] postStartSetup for "newest-cni-277907" (driver="docker")
	I1003 19:41:58.896954  496330 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 19:41:58.897017  496330 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 19:41:58.897064  496330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-277907
	I1003 19:41:58.920025  496330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/newest-cni-277907/id_rsa Username:docker}
	I1003 19:41:59.016941  496330 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 19:41:59.020542  496330 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1003 19:41:59.020572  496330 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1003 19:41:59.020584  496330 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-284583/.minikube/addons for local assets ...
	I1003 19:41:59.020639  496330 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-284583/.minikube/files for local assets ...
	I1003 19:41:59.020720  496330 filesync.go:149] local asset: /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem -> 2864342.pem in /etc/ssl/certs
	I1003 19:41:59.020854  496330 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 19:41:59.028219  496330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem --> /etc/ssl/certs/2864342.pem (1708 bytes)
	I1003 19:41:59.046931  496330 start.go:296] duration metric: took 149.973159ms for postStartSetup
	I1003 19:41:59.047037  496330 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 19:41:59.047082  496330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-277907
	I1003 19:41:59.065427  496330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/newest-cni-277907/id_rsa Username:docker}
	I1003 19:41:59.170122  496330 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1003 19:41:59.174964  496330 fix.go:56] duration metric: took 4.810622307s for fixHost
	I1003 19:41:59.174997  496330 start.go:83] releasing machines lock for "newest-cni-277907", held for 4.810670061s
	I1003 19:41:59.175068  496330 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-277907
	I1003 19:41:59.192041  496330 ssh_runner.go:195] Run: cat /version.json
	I1003 19:41:59.192077  496330 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 19:41:59.192091  496330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-277907
	I1003 19:41:59.192130  496330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-277907
	I1003 19:41:59.210688  496330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/newest-cni-277907/id_rsa Username:docker}
	I1003 19:41:59.222576  496330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/newest-cni-277907/id_rsa Username:docker}
	I1003 19:41:59.308358  496330 ssh_runner.go:195] Run: systemctl --version
	I1003 19:41:59.406435  496330 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1003 19:41:59.454720  496330 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1003 19:41:59.459168  496330 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 19:41:59.459311  496330 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 19:41:59.467589  496330 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1003 19:41:59.467665  496330 start.go:495] detecting cgroup driver to use...
	I1003 19:41:59.467723  496330 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1003 19:41:59.467802  496330 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 19:41:59.483706  496330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 19:41:59.496702  496330 docker.go:218] disabling cri-docker service (if available) ...
	I1003 19:41:59.496803  496330 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1003 19:41:59.513232  496330 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1003 19:41:59.526862  496330 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1003 19:41:59.638847  496330 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1003 19:41:59.750268  496330 docker.go:234] disabling docker service ...
	I1003 19:41:59.750367  496330 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1003 19:41:59.767170  496330 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1003 19:41:59.780683  496330 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1003 19:41:59.902582  496330 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1003 19:42:00.025665  496330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1003 19:42:00.086131  496330 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 19:42:00.145494  496330 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1003 19:42:00.145678  496330 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:42:00.164046  496330 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1003 19:42:00.164124  496330 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:42:00.180462  496330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:42:00.227934  496330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:42:00.275054  496330 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 19:42:00.300899  496330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:42:00.333134  496330 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:42:00.357490  496330 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:42:00.376363  496330 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 19:42:00.387152  496330 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 19:42:00.397267  496330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 19:42:00.542772  496330 ssh_runner.go:195] Run: sudo systemctl restart crio
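Taken together, the sed edits above configure the pause image, the cgroupfs cgroup manager, a pod-scoped conmon cgroup, and the unprivileged-port sysctl. As an illustrative reconstruction only (minikube edits /etc/crio/crio.conf.d/02-crio.conf in place; this is not the file contents from the run), the net effect is roughly:

# Illustrative drop-in approximating the sed edits above
cat <<'EOF' | sudo tee /etc/crio/crio.conf.d/99-minikube-example.conf
[crio.image]
pause_image = "registry.k8s.io/pause:3.10.1"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
EOF
sudo systemctl restart crio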
	I1003 19:42:00.681682  496330 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1003 19:42:00.681804  496330 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1003 19:42:00.685936  496330 start.go:563] Will wait 60s for crictl version
	I1003 19:42:00.686043  496330 ssh_runner.go:195] Run: which crictl
	I1003 19:42:00.689721  496330 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1003 19:42:00.721913  496330 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1003 19:42:00.722107  496330 ssh_runner.go:195] Run: crio --version
	I1003 19:42:00.756621  496330 ssh_runner.go:195] Run: crio --version
	I1003 19:42:00.790729  496330 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1003 19:42:00.793891  496330 cli_runner.go:164] Run: docker network inspect newest-cni-277907 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 19:42:00.810171  496330 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1003 19:42:00.813899  496330 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 19:42:00.826760  496330 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1003 19:42:00.829539  496330 kubeadm.go:883] updating cluster {Name:newest-cni-277907 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-277907 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:
262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1003 19:42:00.829687  496330 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 19:42:00.829779  496330 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 19:42:00.863741  496330 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 19:42:00.863765  496330 crio.go:433] Images already preloaded, skipping extraction
	I1003 19:42:00.863825  496330 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 19:42:00.889508  496330 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 19:42:00.889534  496330 cache_images.go:85] Images are preloaded, skipping loading
	I1003 19:42:00.889542  496330 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1003 19:42:00.889650  496330 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-277907 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-277907 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
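The kubelet unit override shown above is written to the node a few lines further down (10-kubeadm.conf and kubelet.service). Purely as an illustration, not something the run executes, it could be inspected on the node with:

docker exec newest-cni-277907 systemctl cat kubelet
docker exec newest-cni-277907 cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf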
	I1003 19:42:00.889738  496330 ssh_runner.go:195] Run: crio config
	I1003 19:42:00.949329  496330 cni.go:84] Creating CNI manager for ""
	I1003 19:42:00.949353  496330 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 19:42:00.949372  496330 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I1003 19:42:00.949395  496330 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-277907 NodeName:newest-cni-277907 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 19:42:00.949524  496330 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-277907"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1003 19:42:00.949599  496330 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1003 19:42:00.958374  496330 binaries.go:44] Found k8s binaries, skipping transfer
	I1003 19:42:00.958526  496330 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1003 19:42:00.966282  496330 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1003 19:42:00.979145  496330 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 19:42:00.992933  496330 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
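On a fresh cluster a file like /var/tmp/minikube/kubeadm.yaml.new would be handed to kubeadm init; in this run minikube instead takes the restart path and only diffs it against the existing config (the sudo diff -u step appears later in the log). As an illustration of how such a config is normally consumed:

# Illustrative only; this run does not execute kubeadm init
sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new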
	I1003 19:42:01.008865  496330 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1003 19:42:01.012924  496330 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 19:42:01.023121  496330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 19:42:01.143635  496330 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 19:42:01.169243  496330 certs.go:69] Setting up /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/newest-cni-277907 for IP: 192.168.85.2
	I1003 19:42:01.169262  496330 certs.go:195] generating shared ca certs ...
	I1003 19:42:01.169278  496330 certs.go:227] acquiring lock for ca certs: {Name:mk5a10e6c921326e9c211447576eaeb893259ba7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:42:01.169433  496330 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21625-284583/.minikube/ca.key
	I1003 19:42:01.169477  496330 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.key
	I1003 19:42:01.169485  496330 certs.go:257] generating profile certs ...
	I1003 19:42:01.169578  496330 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/newest-cni-277907/client.key
	I1003 19:42:01.169670  496330 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/newest-cni-277907/apiserver.key.e8e82bd7
	I1003 19:42:01.169719  496330 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/newest-cni-277907/proxy-client.key
	I1003 19:42:01.169843  496330 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/286434.pem (1338 bytes)
	W1003 19:42:01.169873  496330 certs.go:480] ignoring /home/jenkins/minikube-integration/21625-284583/.minikube/certs/286434_empty.pem, impossibly tiny 0 bytes
	I1003 19:42:01.169880  496330 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 19:42:01.169909  496330 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem (1082 bytes)
	I1003 19:42:01.169932  496330 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem (1123 bytes)
	I1003 19:42:01.169954  496330 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/key.pem (1675 bytes)
	I1003 19:42:01.170005  496330 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem (1708 bytes)
	I1003 19:42:01.170652  496330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 19:42:01.193794  496330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1003 19:42:01.216859  496330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 19:42:01.253269  496330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1003 19:42:01.277760  496330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/newest-cni-277907/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1003 19:42:01.304532  496330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/newest-cni-277907/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1003 19:42:01.334477  496330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/newest-cni-277907/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 19:42:01.362611  496330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/newest-cni-277907/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1003 19:42:01.386292  496330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/certs/286434.pem --> /usr/share/ca-certificates/286434.pem (1338 bytes)
	I1003 19:42:01.405475  496330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem --> /usr/share/ca-certificates/2864342.pem (1708 bytes)
	I1003 19:42:01.425513  496330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 19:42:01.445188  496330 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1003 19:42:01.459720  496330 ssh_runner.go:195] Run: openssl version
	I1003 19:42:01.466204  496330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 19:42:01.477623  496330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 19:42:01.482378  496330 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  3 18:27 /usr/share/ca-certificates/minikubeCA.pem
	I1003 19:42:01.482446  496330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 19:42:01.524975  496330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1003 19:42:01.532964  496330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/286434.pem && ln -fs /usr/share/ca-certificates/286434.pem /etc/ssl/certs/286434.pem"
	I1003 19:42:01.541283  496330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/286434.pem
	I1003 19:42:01.545398  496330 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  3 18:34 /usr/share/ca-certificates/286434.pem
	I1003 19:42:01.545503  496330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/286434.pem
	I1003 19:42:01.586675  496330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/286434.pem /etc/ssl/certs/51391683.0"
	I1003 19:42:01.595032  496330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2864342.pem && ln -fs /usr/share/ca-certificates/2864342.pem /etc/ssl/certs/2864342.pem"
	I1003 19:42:01.603895  496330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2864342.pem
	I1003 19:42:01.608059  496330 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  3 18:34 /usr/share/ca-certificates/2864342.pem
	I1003 19:42:01.608127  496330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2864342.pem
	I1003 19:42:01.650769  496330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2864342.pem /etc/ssl/certs/3ec20f2e.0"
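The openssl x509 -hash / ln -fs pairs above follow OpenSSL's hashed-directory convention: each symlink in /etc/ssl/certs is named after the certificate's subject hash (b5213941.0, 51391683.0, 3ec20f2e.0 above). As an illustration, not from the log:

# Derive the /etc/ssl/certs/<hash>.0 name for the minikube CA
HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"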
	I1003 19:42:01.659797  496330 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 19:42:01.665334  496330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1003 19:42:01.707667  496330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1003 19:42:01.749291  496330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1003 19:42:01.791218  496330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1003 19:42:01.843750  496330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1003 19:42:01.893452  496330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
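Each -checkend 86400 call above exits non-zero if the certificate will expire within the next 24 hours, presumably so minikube can decide whether control-plane certs need regenerating. A minimal standalone check, for illustration:

# Exit status 0 => still valid for at least another 86400 seconds
if sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
  echo "apiserver.crt is not about to expire"
else
  echo "apiserver.crt expires within 24h (or could not be read)"
fi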
	I1003 19:42:01.989796  496330 kubeadm.go:400] StartCluster: {Name:newest-cni-277907 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-277907 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262
144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 19:42:01.989946  496330 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1003 19:42:01.990056  496330 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1003 19:42:02.053220  496330 cri.go:89] found id: "e013c184e6e3ac3b12ebb1e788f88a522df87d865c2cded32ce1ba2140687d59"
	I1003 19:42:02.053298  496330 cri.go:89] found id: "ef5fea601208f50b53f6eef5d5284a014ca62a5cdc7ba7676e680d130cb543cb"
	I1003 19:42:02.053319  496330 cri.go:89] found id: "d54346ccf42f503b43643a2a4f2797f3f6219e7ebb4f15de4620be40f934e579"
	I1003 19:42:02.053338  496330 cri.go:89] found id: "19786ebd68db6b6c5bd023f0384178b772b9a909a9ca5278f768374892e103d8"
	I1003 19:42:02.053370  496330 cri.go:89] found id: ""
	I1003 19:42:02.053465  496330 ssh_runner.go:195] Run: sudo runc list -f json
	W1003 19:42:02.077433  496330 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-03T19:42:02Z" level=error msg="open /run/runc: no such file or directory"
	I1003 19:42:02.077523  496330 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 19:42:02.095110  496330 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1003 19:42:02.095141  496330 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1003 19:42:02.095193  496330 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1003 19:42:02.106853  496330 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1003 19:42:02.107444  496330 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-277907" does not appear in /home/jenkins/minikube-integration/21625-284583/kubeconfig
	I1003 19:42:02.107712  496330 kubeconfig.go:62] /home/jenkins/minikube-integration/21625-284583/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-277907" cluster setting kubeconfig missing "newest-cni-277907" context setting]
	I1003 19:42:02.108230  496330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/kubeconfig: {Name:mkc1323fd87f4a78231a26d2dab0dff7feecf1e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:42:02.109732  496330 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1003 19:42:02.134182  496330 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1003 19:42:02.134213  496330 kubeadm.go:601] duration metric: took 39.065513ms to restartPrimaryControlPlane
	I1003 19:42:02.134223  496330 kubeadm.go:402] duration metric: took 144.434833ms to StartCluster
	I1003 19:42:02.134237  496330 settings.go:142] acquiring lock: {Name:mkc95577dbc448e3409dfa2b5e53a3a1327cb451 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:42:02.134309  496330 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21625-284583/kubeconfig
	I1003 19:42:02.135258  496330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/kubeconfig: {Name:mkc1323fd87f4a78231a26d2dab0dff7feecf1e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:42:02.135496  496330 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1003 19:42:02.135709  496330 config.go:182] Loaded profile config "newest-cni-277907": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 19:42:02.135846  496330 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1003 19:42:02.135920  496330 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-277907"
	I1003 19:42:02.135941  496330 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-277907"
	W1003 19:42:02.135950  496330 addons.go:247] addon storage-provisioner should already be in state true
	I1003 19:42:02.135973  496330 host.go:66] Checking if "newest-cni-277907" exists ...
	I1003 19:42:02.136029  496330 addons.go:69] Setting dashboard=true in profile "newest-cni-277907"
	I1003 19:42:02.136079  496330 addons.go:238] Setting addon dashboard=true in "newest-cni-277907"
	W1003 19:42:02.136100  496330 addons.go:247] addon dashboard should already be in state true
	I1003 19:42:02.136151  496330 host.go:66] Checking if "newest-cni-277907" exists ...
	I1003 19:42:02.136820  496330 cli_runner.go:164] Run: docker container inspect newest-cni-277907 --format={{.State.Status}}
	I1003 19:42:02.137128  496330 cli_runner.go:164] Run: docker container inspect newest-cni-277907 --format={{.State.Status}}
	I1003 19:42:02.138228  496330 addons.go:69] Setting default-storageclass=true in profile "newest-cni-277907"
	I1003 19:42:02.138257  496330 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-277907"
	I1003 19:42:02.138655  496330 cli_runner.go:164] Run: docker container inspect newest-cni-277907 --format={{.State.Status}}
	I1003 19:42:02.146641  496330 out.go:179] * Verifying Kubernetes components...
	I1003 19:42:02.149848  496330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 19:42:02.206184  496330 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1003 19:42:02.210108  496330 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1003 19:42:02.213723  496330 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1003 19:42:00.171338  492927 pod_ready.go:104] pod "coredns-66bc5c9577-l8knz" is not "Ready", error: <nil>
	W1003 19:42:02.174293  492927 pod_ready.go:104] pod "coredns-66bc5c9577-l8knz" is not "Ready", error: <nil>
	I1003 19:42:02.213725  496330 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1003 19:42:02.213834  496330 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1003 19:42:02.213913  496330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-277907
	I1003 19:42:02.216886  496330 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 19:42:02.216911  496330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1003 19:42:02.216976  496330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-277907
	I1003 19:42:02.237714  496330 addons.go:238] Setting addon default-storageclass=true in "newest-cni-277907"
	W1003 19:42:02.237740  496330 addons.go:247] addon default-storageclass should already be in state true
	I1003 19:42:02.237765  496330 host.go:66] Checking if "newest-cni-277907" exists ...
	I1003 19:42:02.238177  496330 cli_runner.go:164] Run: docker container inspect newest-cni-277907 --format={{.State.Status}}
	I1003 19:42:02.265005  496330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/newest-cni-277907/id_rsa Username:docker}
	I1003 19:42:02.273019  496330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/newest-cni-277907/id_rsa Username:docker}
	I1003 19:42:02.288819  496330 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1003 19:42:02.288841  496330 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1003 19:42:02.288902  496330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-277907
	I1003 19:42:02.318259  496330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/newest-cni-277907/id_rsa Username:docker}
	I1003 19:42:02.530550  496330 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 19:42:02.541602  496330 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1003 19:42:02.541679  496330 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1003 19:42:02.557502  496330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 19:42:02.579345  496330 api_server.go:52] waiting for apiserver process to appear ...
	I1003 19:42:02.579417  496330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 19:42:02.597846  496330 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1003 19:42:02.597871  496330 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1003 19:42:02.613301  496330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1003 19:42:02.648622  496330 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1003 19:42:02.648648  496330 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1003 19:42:02.731767  496330 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1003 19:42:02.731793  496330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1003 19:42:02.812627  496330 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1003 19:42:02.812651  496330 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1003 19:42:02.841688  496330 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1003 19:42:02.841734  496330 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1003 19:42:02.870058  496330 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1003 19:42:02.870083  496330 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1003 19:42:02.925263  496330 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1003 19:42:02.925287  496330 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1003 19:42:02.952871  496330 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1003 19:42:02.952897  496330 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1003 19:42:02.976325  496330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
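After the dashboard manifests above are applied, a follow-up check could look like the sketch below. This is illustrative only; the namespace name kubernetes-dashboard is the one minikube's dashboard addon conventionally creates and is an assumption here, since the manifest contents are not printed in the log.

sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
  /var/lib/minikube/binaries/v1.34.1/kubectl -n kubernetes-dashboard get deploy,svc,pods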
	I1003 19:42:07.745046  496330 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (5.165599909s)
	I1003 19:42:07.745078  496330 api_server.go:72] duration metric: took 5.609559819s to wait for apiserver process to appear ...
	I1003 19:42:07.745085  496330 api_server.go:88] waiting for apiserver healthz status ...
	I1003 19:42:07.745102  496330 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1003 19:42:07.745406  496330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.13207849s)
	I1003 19:42:07.746625  496330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.189054454s)
	I1003 19:42:07.776204  496330 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1003 19:42:07.776233  496330 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1003 19:42:07.810390  496330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.83401929s)
	I1003 19:42:07.813770  496330 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-277907 addons enable metrics-server
	
	I1003 19:42:07.816441  496330 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	W1003 19:42:04.666086  492927 pod_ready.go:104] pod "coredns-66bc5c9577-l8knz" is not "Ready", error: <nil>
	W1003 19:42:07.169781  492927 pod_ready.go:104] pod "coredns-66bc5c9577-l8knz" is not "Ready", error: <nil>
	I1003 19:42:07.819559  496330 addons.go:514] duration metric: took 5.683791267s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1003 19:42:08.245394  496330 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1003 19:42:08.258204  496330 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1003 19:42:08.259339  496330 api_server.go:141] control plane version: v1.34.1
	I1003 19:42:08.259408  496330 api_server.go:131] duration metric: took 514.315619ms to wait for apiserver health ...
	I1003 19:42:08.259433  496330 system_pods.go:43] waiting for kube-system pods to appear ...
	I1003 19:42:08.265595  496330 system_pods.go:59] 8 kube-system pods found
	I1003 19:42:08.265687  496330 system_pods.go:61] "coredns-66bc5c9577-qvbbr" [1cd277df-18e2-4280-aed7-5f55acbafa2e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1003 19:42:08.265713  496330 system_pods.go:61] "etcd-newest-cni-277907" [9a388045-313d-4a5e-a56a-c070a23d10f0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1003 19:42:08.265750  496330 system_pods.go:61] "kindnet-b6wxk" [efbd6505-dbd9-4229-9f30-5de99ce9258e] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1003 19:42:08.265779  496330 system_pods.go:61] "kube-apiserver-newest-cni-277907" [e333974e-7706-4dd3-a108-96d50d755815] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1003 19:42:08.265799  496330 system_pods.go:61] "kube-controller-manager-newest-cni-277907" [ca367ef6-21e7-49f2-bb9e-a73465e96941] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1003 19:42:08.265836  496330 system_pods.go:61] "kube-proxy-2ss46" [3e843f2f-9e62-4da8-a413-b23a4e8c33ef] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1003 19:42:08.265863  496330 system_pods.go:61] "kube-scheduler-newest-cni-277907" [7d578ea2-dbb0-4886-96d7-ed212ff4907a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1003 19:42:08.265884  496330 system_pods.go:61] "storage-provisioner" [da0d0bff-83e0-4502-b45b-5becfa549ef9] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1003 19:42:08.265918  496330 system_pods.go:74] duration metric: took 6.465857ms to wait for pod list to return data ...
	I1003 19:42:08.265946  496330 default_sa.go:34] waiting for default service account to be created ...
	I1003 19:42:08.269366  496330 default_sa.go:45] found service account: "default"
	I1003 19:42:08.269434  496330 default_sa.go:55] duration metric: took 3.458213ms for default service account to be created ...
	I1003 19:42:08.269460  496330 kubeadm.go:586] duration metric: took 6.133940613s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1003 19:42:08.269505  496330 node_conditions.go:102] verifying NodePressure condition ...
	I1003 19:42:08.272314  496330 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1003 19:42:08.272395  496330 node_conditions.go:123] node cpu capacity is 2
	I1003 19:42:08.272423  496330 node_conditions.go:105] duration metric: took 2.89488ms to run NodePressure ...
	I1003 19:42:08.272450  496330 start.go:241] waiting for startup goroutines ...
	I1003 19:42:08.272483  496330 start.go:246] waiting for cluster config update ...
	I1003 19:42:08.272511  496330 start.go:255] writing updated cluster config ...
	I1003 19:42:08.272895  496330 ssh_runner.go:195] Run: rm -f paused
	I1003 19:42:08.346481  496330 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1003 19:42:08.351677  496330 out.go:179] * Done! kubectl is now configured to use "newest-cni-277907" cluster and "default" namespace by default
	W1003 19:42:09.665900  492927 pod_ready.go:104] pod "coredns-66bc5c9577-l8knz" is not "Ready", error: <nil>
	I1003 19:42:10.182367  492927 pod_ready.go:94] pod "coredns-66bc5c9577-l8knz" is "Ready"
	I1003 19:42:10.182396  492927 pod_ready.go:86] duration metric: took 32.522452921s for pod "coredns-66bc5c9577-l8knz" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:42:10.190311  492927 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-842797" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:42:10.198116  492927 pod_ready.go:94] pod "etcd-default-k8s-diff-port-842797" is "Ready"
	I1003 19:42:10.198142  492927 pod_ready.go:86] duration metric: took 7.802707ms for pod "etcd-default-k8s-diff-port-842797" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:42:10.202592  492927 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-842797" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:42:10.209160  492927 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-842797" is "Ready"
	I1003 19:42:10.209186  492927 pod_ready.go:86] duration metric: took 6.567233ms for pod "kube-apiserver-default-k8s-diff-port-842797" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:42:10.213271  492927 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-842797" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:42:10.363451  492927 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-842797" is "Ready"
	I1003 19:42:10.363649  492927 pod_ready.go:86] duration metric: took 150.347798ms for pod "kube-controller-manager-default-k8s-diff-port-842797" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:42:10.564287  492927 pod_ready.go:83] waiting for pod "kube-proxy-gvslj" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:42:10.963683  492927 pod_ready.go:94] pod "kube-proxy-gvslj" is "Ready"
	I1003 19:42:10.963752  492927 pod_ready.go:86] duration metric: took 399.435374ms for pod "kube-proxy-gvslj" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:42:11.164167  492927 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-842797" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:42:11.564966  492927 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-842797" is "Ready"
	I1003 19:42:11.564991  492927 pod_ready.go:86] duration metric: took 400.7524ms for pod "kube-scheduler-default-k8s-diff-port-842797" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:42:11.565009  492927 pod_ready.go:40] duration metric: took 33.956882521s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1003 19:42:11.647394  492927 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1003 19:42:11.650572  492927 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-842797" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 03 19:42:07 newest-cni-277907 crio[611]: time="2025-10-03T19:42:07.583302276Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 19:42:07 newest-cni-277907 crio[611]: time="2025-10-03T19:42:07.586403789Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-2ss46/POD" id=cd6108ce-3776-45e1-b4b9-5849086da9e6 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 03 19:42:07 newest-cni-277907 crio[611]: time="2025-10-03T19:42:07.586472426Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 19:42:07 newest-cni-277907 crio[611]: time="2025-10-03T19:42:07.594082744Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=eac5b371-8cdb-4864-9920-d0bba20ea7be name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 03 19:42:07 newest-cni-277907 crio[611]: time="2025-10-03T19:42:07.595087305Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=cd6108ce-3776-45e1-b4b9-5849086da9e6 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 03 19:42:07 newest-cni-277907 crio[611]: time="2025-10-03T19:42:07.606732427Z" level=info msg="Ran pod sandbox b533272fbef9f4ef6ed9587f60d1578d56c725838904b1e44f52a8a47d9678d5 with infra container: kube-system/kindnet-b6wxk/POD" id=eac5b371-8cdb-4864-9920-d0bba20ea7be name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 03 19:42:07 newest-cni-277907 crio[611]: time="2025-10-03T19:42:07.60939646Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=3c2900a3-cd3b-435f-837a-dcde0fd7db94 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 19:42:07 newest-cni-277907 crio[611]: time="2025-10-03T19:42:07.614817427Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=d01da6a4-2116-4668-98a2-d9f1241c6674 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 19:42:07 newest-cni-277907 crio[611]: time="2025-10-03T19:42:07.615987751Z" level=info msg="Creating container: kube-system/kindnet-b6wxk/kindnet-cni" id=036098f8-aba0-4ebd-a38b-d4bd981e2137 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:42:07 newest-cni-277907 crio[611]: time="2025-10-03T19:42:07.616331699Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 19:42:07 newest-cni-277907 crio[611]: time="2025-10-03T19:42:07.624530392Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 19:42:07 newest-cni-277907 crio[611]: time="2025-10-03T19:42:07.635335221Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 19:42:07 newest-cni-277907 crio[611]: time="2025-10-03T19:42:07.635837104Z" level=info msg="Ran pod sandbox 71d20ad61a98aed1ed611bf5682d771a6aa665e8c02bdaf3e4dbf56b9d943263 with infra container: kube-system/kube-proxy-2ss46/POD" id=cd6108ce-3776-45e1-b4b9-5849086da9e6 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 03 19:42:07 newest-cni-277907 crio[611]: time="2025-10-03T19:42:07.652339541Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=4f15c193-fe95-4004-bea4-3b98af4e6255 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 19:42:07 newest-cni-277907 crio[611]: time="2025-10-03T19:42:07.653783297Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=b6c34717-56ae-492d-a41e-f3b782ba8285 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 19:42:07 newest-cni-277907 crio[611]: time="2025-10-03T19:42:07.654954211Z" level=info msg="Creating container: kube-system/kube-proxy-2ss46/kube-proxy" id=742cd70f-7c44-44a1-a981-9cc16118eab4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:42:07 newest-cni-277907 crio[611]: time="2025-10-03T19:42:07.655638833Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 19:42:07 newest-cni-277907 crio[611]: time="2025-10-03T19:42:07.67905154Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 19:42:07 newest-cni-277907 crio[611]: time="2025-10-03T19:42:07.679813472Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 19:42:07 newest-cni-277907 crio[611]: time="2025-10-03T19:42:07.724409928Z" level=info msg="Created container 0bb2708b8a68c6bf83e7c6ebde209424b7a34780db11c23f8c8ee479b9536089: kube-system/kindnet-b6wxk/kindnet-cni" id=036098f8-aba0-4ebd-a38b-d4bd981e2137 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:42:07 newest-cni-277907 crio[611]: time="2025-10-03T19:42:07.725354525Z" level=info msg="Starting container: 0bb2708b8a68c6bf83e7c6ebde209424b7a34780db11c23f8c8ee479b9536089" id=2b81076f-8c32-4347-a043-d9d9be39a8be name=/runtime.v1.RuntimeService/StartContainer
	Oct 03 19:42:07 newest-cni-277907 crio[611]: time="2025-10-03T19:42:07.731538113Z" level=info msg="Started container" PID=1061 containerID=0bb2708b8a68c6bf83e7c6ebde209424b7a34780db11c23f8c8ee479b9536089 description=kube-system/kindnet-b6wxk/kindnet-cni id=2b81076f-8c32-4347-a043-d9d9be39a8be name=/runtime.v1.RuntimeService/StartContainer sandboxID=b533272fbef9f4ef6ed9587f60d1578d56c725838904b1e44f52a8a47d9678d5
	Oct 03 19:42:07 newest-cni-277907 crio[611]: time="2025-10-03T19:42:07.816883039Z" level=info msg="Created container e9387afb5b6ce8d012cecb62f497dd44a46bfcfa85872e279424e14948ca19e3: kube-system/kube-proxy-2ss46/kube-proxy" id=742cd70f-7c44-44a1-a981-9cc16118eab4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:42:07 newest-cni-277907 crio[611]: time="2025-10-03T19:42:07.818075943Z" level=info msg="Starting container: e9387afb5b6ce8d012cecb62f497dd44a46bfcfa85872e279424e14948ca19e3" id=bea3cb77-b43a-4b98-9708-27642ccaca92 name=/runtime.v1.RuntimeService/StartContainer
	Oct 03 19:42:07 newest-cni-277907 crio[611]: time="2025-10-03T19:42:07.821497142Z" level=info msg="Started container" PID=1064 containerID=e9387afb5b6ce8d012cecb62f497dd44a46bfcfa85872e279424e14948ca19e3 description=kube-system/kube-proxy-2ss46/kube-proxy id=bea3cb77-b43a-4b98-9708-27642ccaca92 name=/runtime.v1.RuntimeService/StartContainer sandboxID=71d20ad61a98aed1ed611bf5682d771a6aa665e8c02bdaf3e4dbf56b9d943263
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	e9387afb5b6ce       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   4 seconds ago       Running             kube-proxy                1                   71d20ad61a98a       kube-proxy-2ss46                            kube-system
	0bb2708b8a68c       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   4 seconds ago       Running             kindnet-cni               1                   b533272fbef9f       kindnet-b6wxk                               kube-system
	e013c184e6e3a       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   10 seconds ago      Running             etcd                      1                   385cfb31fd940       etcd-newest-cni-277907                      kube-system
	ef5fea601208f       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   10 seconds ago      Running             kube-controller-manager   1                   2aec7d9732cb4       kube-controller-manager-newest-cni-277907   kube-system
	d54346ccf42f5       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   10 seconds ago      Running             kube-apiserver            1                   0687f047a497b       kube-apiserver-newest-cni-277907            kube-system
	19786ebd68db6       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   10 seconds ago      Running             kube-scheduler            1                   5e346fa738c07       kube-scheduler-newest-cni-277907            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-277907
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-277907
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a43873c79fc22f8b1ccd29d3dfa635d392b09335
	                    minikube.k8s.io/name=newest-cni-277907
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_03T19_41_41_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 03 Oct 2025 19:41:37 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-277907
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 03 Oct 2025 19:42:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 03 Oct 2025 19:42:06 +0000   Fri, 03 Oct 2025 19:41:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 03 Oct 2025 19:42:06 +0000   Fri, 03 Oct 2025 19:41:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 03 Oct 2025 19:42:06 +0000   Fri, 03 Oct 2025 19:41:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 03 Oct 2025 19:42:06 +0000   Fri, 03 Oct 2025 19:41:29 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-277907
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 aa9cc27629e84e84bad28b65f03df7b6
	  System UUID:                20e576e4-dd3f-4016-9b52-c906c3cc7f99
	  Boot ID:                    3762136e-8bec-4104-a5cb-0b1976f6048e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-277907                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         32s
	  kube-system                 kindnet-b6wxk                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-newest-cni-277907             250m (12%)    0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-newest-cni-277907    200m (10%)    0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-2ss46                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-newest-cni-277907             100m (5%)     0 (0%)      0 (0%)           0 (0%)         32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 25s                kube-proxy       
	  Normal   Starting                 4s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  44s (x8 over 44s)  kubelet          Node newest-cni-277907 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    44s (x8 over 44s)  kubelet          Node newest-cni-277907 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     44s (x8 over 44s)  kubelet          Node newest-cni-277907 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    32s                kubelet          Node newest-cni-277907 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 32s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  32s                kubelet          Node newest-cni-277907 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     32s                kubelet          Node newest-cni-277907 status is now: NodeHasSufficientPID
	  Normal   Starting                 32s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           28s                node-controller  Node newest-cni-277907 event: Registered Node newest-cni-277907 in Controller
	  Normal   Starting                 11s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 11s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  11s (x8 over 11s)  kubelet          Node newest-cni-277907 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11s (x8 over 11s)  kubelet          Node newest-cni-277907 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11s (x8 over 11s)  kubelet          Node newest-cni-277907 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           3s                 node-controller  Node newest-cni-277907 event: Registered Node newest-cni-277907 in Controller
	
	
	==> dmesg <==
	[ +24.839009] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:13] overlayfs: idmapped layers are currently not supported
	[ +26.493253] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:15] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:16] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:17] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000010] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Oct 3 19:18] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:20] overlayfs: idmapped layers are currently not supported
	[ +32.018892] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:22] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:24] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:26] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:32] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:34] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:35] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:36] overlayfs: idmapped layers are currently not supported
	[  +4.740983] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:38] overlayfs: idmapped layers are currently not supported
	[ +12.897300] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:39] overlayfs: idmapped layers are currently not supported
	[  +4.104516] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:41] overlayfs: idmapped layers are currently not supported
	[  +1.990678] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:42] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [e013c184e6e3ac3b12ebb1e788f88a522df87d865c2cded32ce1ba2140687d59] <==
	{"level":"warn","ts":"2025-10-03T19:42:04.454013Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:42:04.478775Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:42:04.495674Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:42:04.506969Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:42:04.537780Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:42:04.548384Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:42:04.567855Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:42:04.591694Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:42:04.635974Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:42:04.662926Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:42:04.700030Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:42:04.713102Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:42:04.727703Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:42:04.747432Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:42:04.759849Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:42:04.792768Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:42:04.803043Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:42:04.819061Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:42:04.836677Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:42:04.855192Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:42:04.892512Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:42:04.900400Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:42:04.932158Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:42:04.955220Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:42:05.131651Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51248","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 19:42:12 up  2:24,  0 user,  load average: 5.65, 3.92, 2.72
	Linux newest-cni-277907 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0bb2708b8a68c6bf83e7c6ebde209424b7a34780db11c23f8c8ee479b9536089] <==
	I1003 19:42:07.894809       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1003 19:42:07.895876       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1003 19:42:07.899908       1 main.go:148] setting mtu 1500 for CNI 
	I1003 19:42:07.899981       1 main.go:178] kindnetd IP family: "ipv4"
	I1003 19:42:07.900024       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-03T19:42:08Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1003 19:42:08.095569       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1003 19:42:08.095665       1 controller.go:381] "Waiting for informer caches to sync"
	I1003 19:42:08.095700       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1003 19:42:08.096611       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [d54346ccf42f503b43643a2a4f2797f3f6219e7ebb4f15de4620be40f934e579] <==
	I1003 19:42:06.259998       1 policy_source.go:240] refreshing policies
	I1003 19:42:06.279709       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1003 19:42:06.288488       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1003 19:42:06.288512       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1003 19:42:06.288608       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1003 19:42:06.288648       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1003 19:42:06.288679       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1003 19:42:06.309016       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1003 19:42:06.309224       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1003 19:42:06.380453       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1003 19:42:06.380943       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1003 19:42:06.440083       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1003 19:42:06.489371       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1003 19:42:06.931188       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1003 19:42:07.188692       1 controller.go:667] quota admission added evaluator for: namespaces
	I1003 19:42:07.264888       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1003 19:42:07.344097       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1003 19:42:07.396423       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1003 19:42:07.482785       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1003 19:42:07.776762       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.163.249"}
	I1003 19:42:07.802725       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.191.83"}
	I1003 19:42:09.674890       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1003 19:42:09.872192       1 controller.go:667] quota admission added evaluator for: endpoints
	I1003 19:42:10.030835       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1003 19:42:10.128014       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [ef5fea601208f50b53f6eef5d5284a014ca62a5cdc7ba7676e680d130cb543cb] <==
	I1003 19:42:09.499126       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1003 19:42:09.499211       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1003 19:42:09.502594       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1003 19:42:09.504828       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1003 19:42:09.507234       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1003 19:42:09.508364       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1003 19:42:09.511309       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1003 19:42:09.515864       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1003 19:42:09.515941       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1003 19:42:09.517106       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1003 19:42:09.517599       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1003 19:42:09.517699       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1003 19:42:09.521736       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1003 19:42:09.521817       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1003 19:42:09.521856       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1003 19:42:09.528145       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1003 19:42:09.529199       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1003 19:42:09.529314       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1003 19:42:09.531263       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1003 19:42:09.531285       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1003 19:42:09.549886       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1003 19:42:09.552062       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1003 19:42:09.552143       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1003 19:42:09.565483       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1003 19:42:09.567835       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	
	
	==> kube-proxy [e9387afb5b6ce8d012cecb62f497dd44a46bfcfa85872e279424e14948ca19e3] <==
	I1003 19:42:07.975330       1 server_linux.go:53] "Using iptables proxy"
	I1003 19:42:08.078273       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1003 19:42:08.188804       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1003 19:42:08.200881       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1003 19:42:08.205411       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1003 19:42:08.518931       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1003 19:42:08.519053       1 server_linux.go:132] "Using iptables Proxier"
	I1003 19:42:08.522933       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1003 19:42:08.523320       1 server.go:527] "Version info" version="v1.34.1"
	I1003 19:42:08.523503       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1003 19:42:08.524937       1 config.go:200] "Starting service config controller"
	I1003 19:42:08.525009       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1003 19:42:08.525051       1 config.go:106] "Starting endpoint slice config controller"
	I1003 19:42:08.525081       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1003 19:42:08.525114       1 config.go:403] "Starting serviceCIDR config controller"
	I1003 19:42:08.525142       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1003 19:42:08.525854       1 config.go:309] "Starting node config controller"
	I1003 19:42:08.525903       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1003 19:42:08.525930       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1003 19:42:08.625626       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1003 19:42:08.625763       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1003 19:42:08.625778       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [19786ebd68db6b6c5bd023f0384178b772b9a909a9ca5278f768374892e103d8] <==
	I1003 19:42:06.987303       1 serving.go:386] Generated self-signed cert in-memory
	I1003 19:42:08.586895       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1003 19:42:08.586933       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1003 19:42:08.592566       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1003 19:42:08.594588       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1003 19:42:08.594629       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1003 19:42:08.594654       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1003 19:42:08.597583       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1003 19:42:08.597606       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1003 19:42:08.597624       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1003 19:42:08.597631       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1003 19:42:08.695739       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1003 19:42:08.698356       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1003 19:42:08.698475       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 03 19:42:06 newest-cni-277907 kubelet[728]: I1003 19:42:06.568891     728 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-277907"
	Oct 03 19:42:06 newest-cni-277907 kubelet[728]: I1003 19:42:06.568982     728 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-277907"
	Oct 03 19:42:06 newest-cni-277907 kubelet[728]: I1003 19:42:06.569007     728 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 03 19:42:06 newest-cni-277907 kubelet[728]: I1003 19:42:06.570211     728 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 03 19:42:06 newest-cni-277907 kubelet[728]: I1003 19:42:06.585395     728 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-277907"
	Oct 03 19:42:06 newest-cni-277907 kubelet[728]: E1003 19:42:06.634451     728 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-277907\" already exists" pod="kube-system/kube-apiserver-newest-cni-277907"
	Oct 03 19:42:06 newest-cni-277907 kubelet[728]: I1003 19:42:06.634486     728 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-277907"
	Oct 03 19:42:06 newest-cni-277907 kubelet[728]: E1003 19:42:06.680911     728 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-277907\" already exists" pod="kube-system/kube-controller-manager-newest-cni-277907"
	Oct 03 19:42:06 newest-cni-277907 kubelet[728]: I1003 19:42:06.680945     728 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-277907"
	Oct 03 19:42:06 newest-cni-277907 kubelet[728]: E1003 19:42:06.704504     728 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-277907\" already exists" pod="kube-system/kube-scheduler-newest-cni-277907"
	Oct 03 19:42:06 newest-cni-277907 kubelet[728]: I1003 19:42:06.704540     728 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-277907"
	Oct 03 19:42:06 newest-cni-277907 kubelet[728]: E1003 19:42:06.729886     728 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-277907\" already exists" pod="kube-system/etcd-newest-cni-277907"
	Oct 03 19:42:07 newest-cni-277907 kubelet[728]: I1003 19:42:07.266879     728 apiserver.go:52] "Watching apiserver"
	Oct 03 19:42:07 newest-cni-277907 kubelet[728]: I1003 19:42:07.285355     728 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 03 19:42:07 newest-cni-277907 kubelet[728]: I1003 19:42:07.335720     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3e843f2f-9e62-4da8-a413-b23a4e8c33ef-xtables-lock\") pod \"kube-proxy-2ss46\" (UID: \"3e843f2f-9e62-4da8-a413-b23a4e8c33ef\") " pod="kube-system/kube-proxy-2ss46"
	Oct 03 19:42:07 newest-cni-277907 kubelet[728]: I1003 19:42:07.335946     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/efbd6505-dbd9-4229-9f30-5de99ce9258e-cni-cfg\") pod \"kindnet-b6wxk\" (UID: \"efbd6505-dbd9-4229-9f30-5de99ce9258e\") " pod="kube-system/kindnet-b6wxk"
	Oct 03 19:42:07 newest-cni-277907 kubelet[728]: I1003 19:42:07.336060     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/efbd6505-dbd9-4229-9f30-5de99ce9258e-xtables-lock\") pod \"kindnet-b6wxk\" (UID: \"efbd6505-dbd9-4229-9f30-5de99ce9258e\") " pod="kube-system/kindnet-b6wxk"
	Oct 03 19:42:07 newest-cni-277907 kubelet[728]: I1003 19:42:07.336157     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/efbd6505-dbd9-4229-9f30-5de99ce9258e-lib-modules\") pod \"kindnet-b6wxk\" (UID: \"efbd6505-dbd9-4229-9f30-5de99ce9258e\") " pod="kube-system/kindnet-b6wxk"
	Oct 03 19:42:07 newest-cni-277907 kubelet[728]: I1003 19:42:07.336256     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e843f2f-9e62-4da8-a413-b23a4e8c33ef-lib-modules\") pod \"kube-proxy-2ss46\" (UID: \"3e843f2f-9e62-4da8-a413-b23a4e8c33ef\") " pod="kube-system/kube-proxy-2ss46"
	Oct 03 19:42:07 newest-cni-277907 kubelet[728]: I1003 19:42:07.372131     728 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 03 19:42:07 newest-cni-277907 kubelet[728]: W1003 19:42:07.603186     728 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/8b59090431046f9d951b48ace59a9091019f835007d577cd4555f6908daa6561/crio-b533272fbef9f4ef6ed9587f60d1578d56c725838904b1e44f52a8a47d9678d5 WatchSource:0}: Error finding container b533272fbef9f4ef6ed9587f60d1578d56c725838904b1e44f52a8a47d9678d5: Status 404 returned error can't find the container with id b533272fbef9f4ef6ed9587f60d1578d56c725838904b1e44f52a8a47d9678d5
	Oct 03 19:42:07 newest-cni-277907 kubelet[728]: W1003 19:42:07.614246     728 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/8b59090431046f9d951b48ace59a9091019f835007d577cd4555f6908daa6561/crio-71d20ad61a98aed1ed611bf5682d771a6aa665e8c02bdaf3e4dbf56b9d943263 WatchSource:0}: Error finding container 71d20ad61a98aed1ed611bf5682d771a6aa665e8c02bdaf3e4dbf56b9d943263: Status 404 returned error can't find the container with id 71d20ad61a98aed1ed611bf5682d771a6aa665e8c02bdaf3e4dbf56b9d943263
	Oct 03 19:42:09 newest-cni-277907 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 03 19:42:09 newest-cni-277907 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 03 19:42:09 newest-cni-277907 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
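Note: the log bundle above was collected automatically by the post-mortem helpers. A roughly equivalent set of diagnostics can be gathered by hand against the same profile for offline triage; the commands below are a sketch only (the profile/node name newest-cni-277907 is taken from the output above, and the kubectl context is assumed to have been written by minikube, as reported in the "Done!" line):

	minikube -p newest-cni-277907 logs
	kubectl --context newest-cni-277907 describe node newest-cni-277907
	kubectl --context newest-cni-277907 get pods -A --field-selector=status.phase!=Running
	docker inspect newest-cni-277907
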
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-277907 -n newest-cni-277907
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-277907 -n newest-cni-277907: exit status 2 (367.769493ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-277907 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-qvbbr storage-provisioner dashboard-metrics-scraper-6ffb444bf9-fzg6v kubernetes-dashboard-855c9754f9-v76lx
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-277907 describe pod coredns-66bc5c9577-qvbbr storage-provisioner dashboard-metrics-scraper-6ffb444bf9-fzg6v kubernetes-dashboard-855c9754f9-v76lx
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-277907 describe pod coredns-66bc5c9577-qvbbr storage-provisioner dashboard-metrics-scraper-6ffb444bf9-fzg6v kubernetes-dashboard-855c9754f9-v76lx: exit status 1 (100.011265ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-qvbbr" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-fzg6v" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-v76lx" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-277907 describe pod coredns-66bc5c9577-qvbbr storage-provisioner dashboard-metrics-scraper-6ffb444bf9-fzg6v kubernetes-dashboard-855c9754f9-v76lx: exit status 1
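One plausible reading of the NotFound errors above is that the four non-running pods were deleted or replaced by their controllers between the list and the describe calls. If this needs to be checked by hand, a hedged sketch is to re-select the pods by label or namespace at triage time rather than by the stale names (the kube-dns label and the kubernetes-dashboard namespace below are the standard ones and are assumptions, not taken from this run's manifests):

	kubectl --context newest-cni-277907 -n kube-system get pods -l k8s-app=kube-dns
	kubectl --context newest-cni-277907 -n kubernetes-dashboard get pods -o wide
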
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-277907
helpers_test.go:243: (dbg) docker inspect newest-cni-277907:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8b59090431046f9d951b48ace59a9091019f835007d577cd4555f6908daa6561",
	        "Created": "2025-10-03T19:41:08.107758945Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 496473,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-03T19:41:54.432915553Z",
	            "FinishedAt": "2025-10-03T19:41:53.555563502Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/8b59090431046f9d951b48ace59a9091019f835007d577cd4555f6908daa6561/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8b59090431046f9d951b48ace59a9091019f835007d577cd4555f6908daa6561/hostname",
	        "HostsPath": "/var/lib/docker/containers/8b59090431046f9d951b48ace59a9091019f835007d577cd4555f6908daa6561/hosts",
	        "LogPath": "/var/lib/docker/containers/8b59090431046f9d951b48ace59a9091019f835007d577cd4555f6908daa6561/8b59090431046f9d951b48ace59a9091019f835007d577cd4555f6908daa6561-json.log",
	        "Name": "/newest-cni-277907",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-277907:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-277907",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8b59090431046f9d951b48ace59a9091019f835007d577cd4555f6908daa6561",
	                "LowerDir": "/var/lib/docker/overlay2/b6fb5b9dd131113b1ef3ef7a8465607ff85135a48ebecb8c77db75dd388bdc0a-init/diff:/var/lib/docker/overlay2/87b205803817b0b71a214d995ab7e10a92033bbf72d76d6e052f1d21ccecb313/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b6fb5b9dd131113b1ef3ef7a8465607ff85135a48ebecb8c77db75dd388bdc0a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b6fb5b9dd131113b1ef3ef7a8465607ff85135a48ebecb8c77db75dd388bdc0a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b6fb5b9dd131113b1ef3ef7a8465607ff85135a48ebecb8c77db75dd388bdc0a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-277907",
	                "Source": "/var/lib/docker/volumes/newest-cni-277907/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-277907",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-277907",
	                "name.minikube.sigs.k8s.io": "newest-cni-277907",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d003088b4efd9ccaec258d708e03119b245ed08d127fc940136f314794fee932",
	            "SandboxKey": "/var/run/docker/netns/d003088b4efd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33463"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33464"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33467"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33465"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33466"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-277907": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1a:b9:d5:55:df:6b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2fb89ec9f4b0d949e37566d491ff7c9e7ec5488e3271757158a55861f4d56349",
	                    "EndpointID": "ba9d77969dba2ed6ca106fed70744bafda1de7e24a2dcf561a6a7c9ec93aed7c",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-277907",
	                        "8b5909043104"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
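For reference, individual fields from the inspect dump above can be read back with Docker's Go-template --format support instead of scanning the full JSON. The lines below are an illustrative sketch only, reusing the newest-cni-277907 container name from this run and the same template expressions minikube itself issues later in this log:

	# host port mapped to the node's SSH port (22/tcp); 33463 on this run
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-277907
	# container IP on the newest-cni-277907 network; 192.168.85.2 on this run
	docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' newest-cni-277907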
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-277907 -n newest-cni-277907
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-277907 -n newest-cni-277907: exit status 2 (346.628969ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
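The status check above only queries {{.Host}}, which does not show kubelet or apiserver state; as a minimal cross-check (a sketch reusing the binary, profile and node flags from this run, not taken from the captured output), the same command without the format filter prints all components:

	# full status: host, kubelet, apiserver, kubeconfig
	out/minikube-linux-arm64 status -p newest-cni-277907 -n newest-cni-277907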
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-277907 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-277907 logs -n 25: (1.093659641s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ image   │ no-preload-643397 image list --format=json                                                                                                                                                                                                    │ no-preload-643397            │ jenkins │ v1.37.0 │ 03 Oct 25 19:39 UTC │ 03 Oct 25 19:39 UTC │
	│ pause   │ -p no-preload-643397 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-643397            │ jenkins │ v1.37.0 │ 03 Oct 25 19:39 UTC │                     │
	│ delete  │ -p no-preload-643397                                                                                                                                                                                                                          │ no-preload-643397            │ jenkins │ v1.37.0 │ 03 Oct 25 19:39 UTC │ 03 Oct 25 19:39 UTC │
	│ delete  │ -p no-preload-643397                                                                                                                                                                                                                          │ no-preload-643397            │ jenkins │ v1.37.0 │ 03 Oct 25 19:39 UTC │ 03 Oct 25 19:39 UTC │
	│ delete  │ -p disable-driver-mounts-839513                                                                                                                                                                                                               │ disable-driver-mounts-839513 │ jenkins │ v1.37.0 │ 03 Oct 25 19:39 UTC │ 03 Oct 25 19:39 UTC │
	│ start   │ -p default-k8s-diff-port-842797 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-842797 │ jenkins │ v1.37.0 │ 03 Oct 25 19:39 UTC │ 03 Oct 25 19:40 UTC │
	│ addons  │ enable metrics-server -p embed-certs-327416 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-327416           │ jenkins │ v1.37.0 │ 03 Oct 25 19:39 UTC │                     │
	│ stop    │ -p embed-certs-327416 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-327416           │ jenkins │ v1.37.0 │ 03 Oct 25 19:39 UTC │ 03 Oct 25 19:39 UTC │
	│ addons  │ enable dashboard -p embed-certs-327416 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-327416           │ jenkins │ v1.37.0 │ 03 Oct 25 19:39 UTC │ 03 Oct 25 19:39 UTC │
	│ start   │ -p embed-certs-327416 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-327416           │ jenkins │ v1.37.0 │ 03 Oct 25 19:39 UTC │ 03 Oct 25 19:40 UTC │
	│ image   │ embed-certs-327416 image list --format=json                                                                                                                                                                                                   │ embed-certs-327416           │ jenkins │ v1.37.0 │ 03 Oct 25 19:40 UTC │ 03 Oct 25 19:40 UTC │
	│ pause   │ -p embed-certs-327416 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-327416           │ jenkins │ v1.37.0 │ 03 Oct 25 19:40 UTC │                     │
	│ delete  │ -p embed-certs-327416                                                                                                                                                                                                                         │ embed-certs-327416           │ jenkins │ v1.37.0 │ 03 Oct 25 19:40 UTC │ 03 Oct 25 19:41 UTC │
	│ delete  │ -p embed-certs-327416                                                                                                                                                                                                                         │ embed-certs-327416           │ jenkins │ v1.37.0 │ 03 Oct 25 19:41 UTC │ 03 Oct 25 19:41 UTC │
	│ start   │ -p newest-cni-277907 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-277907            │ jenkins │ v1.37.0 │ 03 Oct 25 19:41 UTC │ 03 Oct 25 19:41 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-842797 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-842797 │ jenkins │ v1.37.0 │ 03 Oct 25 19:41 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-842797 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-842797 │ jenkins │ v1.37.0 │ 03 Oct 25 19:41 UTC │ 03 Oct 25 19:41 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-842797 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-842797 │ jenkins │ v1.37.0 │ 03 Oct 25 19:41 UTC │ 03 Oct 25 19:41 UTC │
	│ start   │ -p default-k8s-diff-port-842797 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-842797 │ jenkins │ v1.37.0 │ 03 Oct 25 19:41 UTC │ 03 Oct 25 19:42 UTC │
	│ addons  │ enable metrics-server -p newest-cni-277907 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-277907            │ jenkins │ v1.37.0 │ 03 Oct 25 19:41 UTC │                     │
	│ stop    │ -p newest-cni-277907 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-277907            │ jenkins │ v1.37.0 │ 03 Oct 25 19:41 UTC │ 03 Oct 25 19:41 UTC │
	│ addons  │ enable dashboard -p newest-cni-277907 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-277907            │ jenkins │ v1.37.0 │ 03 Oct 25 19:41 UTC │ 03 Oct 25 19:41 UTC │
	│ start   │ -p newest-cni-277907 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-277907            │ jenkins │ v1.37.0 │ 03 Oct 25 19:41 UTC │ 03 Oct 25 19:42 UTC │
	│ image   │ newest-cni-277907 image list --format=json                                                                                                                                                                                                    │ newest-cni-277907            │ jenkins │ v1.37.0 │ 03 Oct 25 19:42 UTC │ 03 Oct 25 19:42 UTC │
	│ pause   │ -p newest-cni-277907 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-277907            │ jenkins │ v1.37.0 │ 03 Oct 25 19:42 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/03 19:41:54
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 19:41:54.120965  496330 out.go:360] Setting OutFile to fd 1 ...
	I1003 19:41:54.121078  496330 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 19:41:54.121089  496330 out.go:374] Setting ErrFile to fd 2...
	I1003 19:41:54.121094  496330 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 19:41:54.121354  496330 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-284583/.minikube/bin
	I1003 19:41:54.121779  496330 out.go:368] Setting JSON to false
	I1003 19:41:54.122774  496330 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8666,"bootTime":1759511849,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1003 19:41:54.122843  496330 start.go:140] virtualization:  
	I1003 19:41:54.127960  496330 out.go:179] * [newest-cni-277907] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1003 19:41:54.131185  496330 out.go:179]   - MINIKUBE_LOCATION=21625
	I1003 19:41:54.131233  496330 notify.go:220] Checking for updates...
	I1003 19:41:54.137573  496330 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 19:41:54.140460  496330 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21625-284583/kubeconfig
	I1003 19:41:54.143316  496330 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-284583/.minikube
	I1003 19:41:54.146169  496330 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1003 19:41:54.149090  496330 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 19:41:54.152535  496330 config.go:182] Loaded profile config "newest-cni-277907": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 19:41:54.153202  496330 driver.go:421] Setting default libvirt URI to qemu:///system
	I1003 19:41:54.188751  496330 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1003 19:41:54.188872  496330 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 19:41:54.255033  496330 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-03 19:41:54.245853078 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1003 19:41:54.255140  496330 docker.go:318] overlay module found
	I1003 19:41:54.258263  496330 out.go:179] * Using the docker driver based on existing profile
	I1003 19:41:54.261145  496330 start.go:304] selected driver: docker
	I1003 19:41:54.261164  496330 start.go:924] validating driver "docker" against &{Name:newest-cni-277907 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-277907 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 19:41:54.261259  496330 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 19:41:54.262011  496330 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 19:41:54.317634  496330 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-03 19:41:54.308339117 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1003 19:41:54.317983  496330 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1003 19:41:54.318017  496330 cni.go:84] Creating CNI manager for ""
	I1003 19:41:54.318080  496330 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 19:41:54.318211  496330 start.go:348] cluster config:
	{Name:newest-cni-277907 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-277907 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 19:41:54.321284  496330 out.go:179] * Starting "newest-cni-277907" primary control-plane node in "newest-cni-277907" cluster
	I1003 19:41:54.324172  496330 cache.go:123] Beginning downloading kic base image for docker with crio
	I1003 19:41:54.326884  496330 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1003 19:41:54.329849  496330 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 19:41:54.329905  496330 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21625-284583/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1003 19:41:54.329918  496330 cache.go:58] Caching tarball of preloaded images
	I1003 19:41:54.329931  496330 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1003 19:41:54.329998  496330 preload.go:233] Found /home/jenkins/minikube-integration/21625-284583/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1003 19:41:54.330008  496330 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1003 19:41:54.330125  496330 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/newest-cni-277907/config.json ...
	I1003 19:41:54.364203  496330 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1003 19:41:54.364223  496330 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1003 19:41:54.364236  496330 cache.go:232] Successfully downloaded all kic artifacts
	I1003 19:41:54.364261  496330 start.go:360] acquireMachinesLock for newest-cni-277907: {Name:mkd134b602e6b475d420a69856bbf9b26bf807b4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 19:41:54.364311  496330 start.go:364] duration metric: took 33.354µs to acquireMachinesLock for "newest-cni-277907"
	I1003 19:41:54.364329  496330 start.go:96] Skipping create...Using existing machine configuration
	I1003 19:41:54.364335  496330 fix.go:54] fixHost starting: 
	I1003 19:41:54.364600  496330 cli_runner.go:164] Run: docker container inspect newest-cni-277907 --format={{.State.Status}}
	I1003 19:41:54.398823  496330 fix.go:112] recreateIfNeeded on newest-cni-277907: state=Stopped err=<nil>
	W1003 19:41:54.398859  496330 fix.go:138] unexpected machine state, will restart: <nil>
	W1003 19:41:53.665999  492927 pod_ready.go:104] pod "coredns-66bc5c9577-l8knz" is not "Ready", error: <nil>
	W1003 19:41:56.165748  492927 pod_ready.go:104] pod "coredns-66bc5c9577-l8knz" is not "Ready", error: <nil>
	W1003 19:41:58.165851  492927 pod_ready.go:104] pod "coredns-66bc5c9577-l8knz" is not "Ready", error: <nil>
	I1003 19:41:54.401832  496330 out.go:252] * Restarting existing docker container for "newest-cni-277907" ...
	I1003 19:41:54.401926  496330 cli_runner.go:164] Run: docker start newest-cni-277907
	I1003 19:41:54.665793  496330 cli_runner.go:164] Run: docker container inspect newest-cni-277907 --format={{.State.Status}}
	I1003 19:41:54.686555  496330 kic.go:430] container "newest-cni-277907" state is running.
	I1003 19:41:54.686942  496330 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-277907
	I1003 19:41:54.706657  496330 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/newest-cni-277907/config.json ...
	I1003 19:41:54.706894  496330 machine.go:93] provisionDockerMachine start ...
	I1003 19:41:54.706959  496330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-277907
	I1003 19:41:54.728116  496330 main.go:141] libmachine: Using SSH client type: native
	I1003 19:41:54.728468  496330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1003 19:41:54.728481  496330 main.go:141] libmachine: About to run SSH command:
	hostname
	I1003 19:41:54.731067  496330 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43580->127.0.0.1:33463: read: connection reset by peer
	I1003 19:41:57.868675  496330 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-277907
	
	I1003 19:41:57.868707  496330 ubuntu.go:182] provisioning hostname "newest-cni-277907"
	I1003 19:41:57.868848  496330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-277907
	I1003 19:41:57.889499  496330 main.go:141] libmachine: Using SSH client type: native
	I1003 19:41:57.889827  496330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1003 19:41:57.889853  496330 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-277907 && echo "newest-cni-277907" | sudo tee /etc/hostname
	I1003 19:41:58.032204  496330 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-277907
	
	I1003 19:41:58.032281  496330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-277907
	I1003 19:41:58.051112  496330 main.go:141] libmachine: Using SSH client type: native
	I1003 19:41:58.051408  496330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1003 19:41:58.051425  496330 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-277907' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-277907/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-277907' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 19:41:58.184815  496330 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1003 19:41:58.184839  496330 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21625-284583/.minikube CaCertPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21625-284583/.minikube}
	I1003 19:41:58.184869  496330 ubuntu.go:190] setting up certificates
	I1003 19:41:58.184886  496330 provision.go:84] configureAuth start
	I1003 19:41:58.184956  496330 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-277907
	I1003 19:41:58.203787  496330 provision.go:143] copyHostCerts
	I1003 19:41:58.203866  496330 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-284583/.minikube/ca.pem, removing ...
	I1003 19:41:58.203890  496330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-284583/.minikube/ca.pem
	I1003 19:41:58.203975  496330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21625-284583/.minikube/ca.pem (1082 bytes)
	I1003 19:41:58.204074  496330 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-284583/.minikube/cert.pem, removing ...
	I1003 19:41:58.204085  496330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-284583/.minikube/cert.pem
	I1003 19:41:58.204114  496330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21625-284583/.minikube/cert.pem (1123 bytes)
	I1003 19:41:58.204172  496330 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-284583/.minikube/key.pem, removing ...
	I1003 19:41:58.204187  496330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-284583/.minikube/key.pem
	I1003 19:41:58.204211  496330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21625-284583/.minikube/key.pem (1675 bytes)
	I1003 19:41:58.204274  496330 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca-key.pem org=jenkins.newest-cni-277907 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-277907]
	I1003 19:41:58.423390  496330 provision.go:177] copyRemoteCerts
	I1003 19:41:58.423459  496330 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1003 19:41:58.423510  496330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-277907
	I1003 19:41:58.441519  496330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/newest-cni-277907/id_rsa Username:docker}
	I1003 19:41:58.537115  496330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1003 19:41:58.556082  496330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1003 19:41:58.574450  496330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1003 19:41:58.596238  496330 provision.go:87] duration metric: took 411.324935ms to configureAuth
	I1003 19:41:58.596308  496330 ubuntu.go:206] setting minikube options for container-runtime
	I1003 19:41:58.596516  496330 config.go:182] Loaded profile config "newest-cni-277907": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 19:41:58.596643  496330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-277907
	I1003 19:41:58.615214  496330 main.go:141] libmachine: Using SSH client type: native
	I1003 19:41:58.615545  496330 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1003 19:41:58.615565  496330 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1003 19:41:58.896907  496330 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1003 19:41:58.896932  496330 machine.go:96] duration metric: took 4.190024679s to provisionDockerMachine
	I1003 19:41:58.896943  496330 start.go:293] postStartSetup for "newest-cni-277907" (driver="docker")
	I1003 19:41:58.896954  496330 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1003 19:41:58.897017  496330 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1003 19:41:58.897064  496330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-277907
	I1003 19:41:58.920025  496330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/newest-cni-277907/id_rsa Username:docker}
	I1003 19:41:59.016941  496330 ssh_runner.go:195] Run: cat /etc/os-release
	I1003 19:41:59.020542  496330 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1003 19:41:59.020572  496330 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1003 19:41:59.020584  496330 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-284583/.minikube/addons for local assets ...
	I1003 19:41:59.020639  496330 filesync.go:126] Scanning /home/jenkins/minikube-integration/21625-284583/.minikube/files for local assets ...
	I1003 19:41:59.020720  496330 filesync.go:149] local asset: /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem -> 2864342.pem in /etc/ssl/certs
	I1003 19:41:59.020854  496330 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1003 19:41:59.028219  496330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem --> /etc/ssl/certs/2864342.pem (1708 bytes)
	I1003 19:41:59.046931  496330 start.go:296] duration metric: took 149.973159ms for postStartSetup
	I1003 19:41:59.047037  496330 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 19:41:59.047082  496330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-277907
	I1003 19:41:59.065427  496330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/newest-cni-277907/id_rsa Username:docker}
	I1003 19:41:59.170122  496330 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1003 19:41:59.174964  496330 fix.go:56] duration metric: took 4.810622307s for fixHost
	I1003 19:41:59.174997  496330 start.go:83] releasing machines lock for "newest-cni-277907", held for 4.810670061s
	I1003 19:41:59.175068  496330 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-277907
	I1003 19:41:59.192041  496330 ssh_runner.go:195] Run: cat /version.json
	I1003 19:41:59.192077  496330 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1003 19:41:59.192091  496330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-277907
	I1003 19:41:59.192130  496330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-277907
	I1003 19:41:59.210688  496330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/newest-cni-277907/id_rsa Username:docker}
	I1003 19:41:59.222576  496330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/newest-cni-277907/id_rsa Username:docker}
	I1003 19:41:59.308358  496330 ssh_runner.go:195] Run: systemctl --version
	I1003 19:41:59.406435  496330 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1003 19:41:59.454720  496330 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1003 19:41:59.459168  496330 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1003 19:41:59.459311  496330 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1003 19:41:59.467589  496330 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1003 19:41:59.467665  496330 start.go:495] detecting cgroup driver to use...
	I1003 19:41:59.467723  496330 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1003 19:41:59.467802  496330 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1003 19:41:59.483706  496330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1003 19:41:59.496702  496330 docker.go:218] disabling cri-docker service (if available) ...
	I1003 19:41:59.496803  496330 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1003 19:41:59.513232  496330 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1003 19:41:59.526862  496330 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1003 19:41:59.638847  496330 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1003 19:41:59.750268  496330 docker.go:234] disabling docker service ...
	I1003 19:41:59.750367  496330 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1003 19:41:59.767170  496330 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1003 19:41:59.780683  496330 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1003 19:41:59.902582  496330 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1003 19:42:00.025665  496330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1003 19:42:00.086131  496330 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1003 19:42:00.145494  496330 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1003 19:42:00.145678  496330 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:42:00.164046  496330 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1003 19:42:00.164124  496330 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:42:00.180462  496330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:42:00.227934  496330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:42:00.275054  496330 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1003 19:42:00.300899  496330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:42:00.333134  496330 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:42:00.357490  496330 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1003 19:42:00.376363  496330 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1003 19:42:00.387152  496330 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1003 19:42:00.397267  496330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 19:42:00.542772  496330 ssh_runner.go:195] Run: sudo systemctl restart crio
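	The sed commands above edit /etc/crio/crio.conf.d/02-crio.conf in place before cri-o is restarted; a minimal sanity check, not taken from this log and assuming the same drop-in path, would be to grep the keys back and query the service state over the same SSH session:

	# confirm the pause image, cgroup manager and unprivileged-port sysctl landed in the drop-in
	sudo grep -E 'pause_image|cgroup_manager|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# cri-o should be active again after the restart
	sudo systemctl is-active crio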
	I1003 19:42:00.681682  496330 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1003 19:42:00.681804  496330 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1003 19:42:00.685936  496330 start.go:563] Will wait 60s for crictl version
	I1003 19:42:00.686043  496330 ssh_runner.go:195] Run: which crictl
	I1003 19:42:00.689721  496330 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1003 19:42:00.721913  496330 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1003 19:42:00.722107  496330 ssh_runner.go:195] Run: crio --version
	I1003 19:42:00.756621  496330 ssh_runner.go:195] Run: crio --version
	I1003 19:42:00.790729  496330 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1003 19:42:00.793891  496330 cli_runner.go:164] Run: docker network inspect newest-cni-277907 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 19:42:00.810171  496330 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1003 19:42:00.813899  496330 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 19:42:00.826760  496330 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1003 19:42:00.829539  496330 kubeadm.go:883] updating cluster {Name:newest-cni-277907 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-277907 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:
262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1003 19:42:00.829687  496330 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 19:42:00.829779  496330 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 19:42:00.863741  496330 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 19:42:00.863765  496330 crio.go:433] Images already preloaded, skipping extraction
	I1003 19:42:00.863825  496330 ssh_runner.go:195] Run: sudo crictl images --output json
	I1003 19:42:00.889508  496330 crio.go:514] all images are preloaded for cri-o runtime.
	I1003 19:42:00.889534  496330 cache_images.go:85] Images are preloaded, skipping loading
	I1003 19:42:00.889542  496330 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1003 19:42:00.889650  496330 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-277907 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-277907 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1003 19:42:00.889738  496330 ssh_runner.go:195] Run: crio config
	I1003 19:42:00.949329  496330 cni.go:84] Creating CNI manager for ""
	I1003 19:42:00.949353  496330 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 19:42:00.949372  496330 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I1003 19:42:00.949395  496330 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-277907 NodeName:newest-cni-277907 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1003 19:42:00.949524  496330 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-277907"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
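	The four YAML documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what minikube writes to /var/tmp/minikube/kubeadm.yaml.new a few lines further down. As a hedged, illustrative cross-check (assuming kubectl access to this cluster), the ClusterConfiguration that kubeadm persisted in-cluster can be fetched and compared by eye against the generated file; only the ClusterConfiguration document is stored there, so the other three sections have no in-cluster counterpart:

	    # ClusterConfiguration as stored by kubeadm in the kubeadm-config ConfigMap
	    kubectl -n kube-system get configmap kubeadm-config -o jsonpath='{.data.ClusterConfiguration}{"\n"}'
	    # generated file on the node, for comparison
	    minikube -p newest-cni-277907 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml.new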
	
	I1003 19:42:00.949599  496330 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1003 19:42:00.958374  496330 binaries.go:44] Found k8s binaries, skipping transfer
	I1003 19:42:00.958526  496330 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1003 19:42:00.966282  496330 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1003 19:42:00.979145  496330 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1003 19:42:00.992933  496330 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1003 19:42:01.008865  496330 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1003 19:42:01.012924  496330 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1003 19:42:01.023121  496330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 19:42:01.143635  496330 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 19:42:01.169243  496330 certs.go:69] Setting up /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/newest-cni-277907 for IP: 192.168.85.2
	I1003 19:42:01.169262  496330 certs.go:195] generating shared ca certs ...
	I1003 19:42:01.169278  496330 certs.go:227] acquiring lock for ca certs: {Name:mk5a10e6c921326e9c211447576eaeb893259ba7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:42:01.169433  496330 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21625-284583/.minikube/ca.key
	I1003 19:42:01.169477  496330 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.key
	I1003 19:42:01.169485  496330 certs.go:257] generating profile certs ...
	I1003 19:42:01.169578  496330 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/newest-cni-277907/client.key
	I1003 19:42:01.169670  496330 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/newest-cni-277907/apiserver.key.e8e82bd7
	I1003 19:42:01.169719  496330 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/newest-cni-277907/proxy-client.key
	I1003 19:42:01.169843  496330 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/286434.pem (1338 bytes)
	W1003 19:42:01.169873  496330 certs.go:480] ignoring /home/jenkins/minikube-integration/21625-284583/.minikube/certs/286434_empty.pem, impossibly tiny 0 bytes
	I1003 19:42:01.169880  496330 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca-key.pem (1679 bytes)
	I1003 19:42:01.169909  496330 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem (1082 bytes)
	I1003 19:42:01.169932  496330 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem (1123 bytes)
	I1003 19:42:01.169954  496330 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/key.pem (1675 bytes)
	I1003 19:42:01.170005  496330 certs.go:484] found cert: /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem (1708 bytes)
	I1003 19:42:01.170652  496330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1003 19:42:01.193794  496330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1003 19:42:01.216859  496330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1003 19:42:01.253269  496330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1003 19:42:01.277760  496330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/newest-cni-277907/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1003 19:42:01.304532  496330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/newest-cni-277907/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1003 19:42:01.334477  496330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/newest-cni-277907/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1003 19:42:01.362611  496330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/newest-cni-277907/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1003 19:42:01.386292  496330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/certs/286434.pem --> /usr/share/ca-certificates/286434.pem (1338 bytes)
	I1003 19:42:01.405475  496330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/ssl/certs/2864342.pem --> /usr/share/ca-certificates/2864342.pem (1708 bytes)
	I1003 19:42:01.425513  496330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21625-284583/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1003 19:42:01.445188  496330 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1003 19:42:01.459720  496330 ssh_runner.go:195] Run: openssl version
	I1003 19:42:01.466204  496330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1003 19:42:01.477623  496330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1003 19:42:01.482378  496330 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  3 18:27 /usr/share/ca-certificates/minikubeCA.pem
	I1003 19:42:01.482446  496330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1003 19:42:01.524975  496330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1003 19:42:01.532964  496330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/286434.pem && ln -fs /usr/share/ca-certificates/286434.pem /etc/ssl/certs/286434.pem"
	I1003 19:42:01.541283  496330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/286434.pem
	I1003 19:42:01.545398  496330 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct  3 18:34 /usr/share/ca-certificates/286434.pem
	I1003 19:42:01.545503  496330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/286434.pem
	I1003 19:42:01.586675  496330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/286434.pem /etc/ssl/certs/51391683.0"
	I1003 19:42:01.595032  496330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2864342.pem && ln -fs /usr/share/ca-certificates/2864342.pem /etc/ssl/certs/2864342.pem"
	I1003 19:42:01.603895  496330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2864342.pem
	I1003 19:42:01.608059  496330 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct  3 18:34 /usr/share/ca-certificates/2864342.pem
	I1003 19:42:01.608127  496330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2864342.pem
	I1003 19:42:01.650769  496330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2864342.pem /etc/ssl/certs/3ec20f2e.0"
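	The three symlinks created above (b5213941.0, 51391683.0, 3ec20f2e.0) follow the standard OpenSSL subject-hash convention: each CA certificate is linked as <hash>.0 under /etc/ssl/certs so that TLS clients can locate it by hash. A minimal sketch of the same idea for an arbitrary certificate (my-ca.pem is a hypothetical path):

	    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/my-ca.pem)
	    sudo ln -fs /usr/share/ca-certificates/my-ca.pem "/etc/ssl/certs/${HASH}.0"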
	I1003 19:42:01.659797  496330 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1003 19:42:01.665334  496330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1003 19:42:01.707667  496330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1003 19:42:01.749291  496330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1003 19:42:01.791218  496330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1003 19:42:01.843750  496330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1003 19:42:01.893452  496330 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
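	Each -checkend 86400 call above exits non-zero if the certificate expires within the next 86400 seconds (24 hours), which is presumably how minikube decides whether a control-plane certificate needs regenerating before it is reused. A hedged sketch looping over the same kind of files:

	    for crt in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	               /var/lib/minikube/certs/etcd/server.crt; do
	      sudo openssl x509 -noout -in "$crt" -checkend 86400 || echo "$crt expires within 24h"
	    done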
	I1003 19:42:01.989796  496330 kubeadm.go:400] StartCluster: {Name:newest-cni-277907 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-277907 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 19:42:01.989946  496330 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1003 19:42:01.990056  496330 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1003 19:42:02.053220  496330 cri.go:89] found id: "e013c184e6e3ac3b12ebb1e788f88a522df87d865c2cded32ce1ba2140687d59"
	I1003 19:42:02.053298  496330 cri.go:89] found id: "ef5fea601208f50b53f6eef5d5284a014ca62a5cdc7ba7676e680d130cb543cb"
	I1003 19:42:02.053319  496330 cri.go:89] found id: "d54346ccf42f503b43643a2a4f2797f3f6219e7ebb4f15de4620be40f934e579"
	I1003 19:42:02.053338  496330 cri.go:89] found id: "19786ebd68db6b6c5bd023f0384178b772b9a909a9ca5278f768374892e103d8"
	I1003 19:42:02.053370  496330 cri.go:89] found id: ""
	I1003 19:42:02.053465  496330 ssh_runner.go:195] Run: sudo runc list -f json
	W1003 19:42:02.077433  496330 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-03T19:42:02Z" level=error msg="open /run/runc: no such file or directory"
	I1003 19:42:02.077523  496330 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1003 19:42:02.095110  496330 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1003 19:42:02.095141  496330 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1003 19:42:02.095193  496330 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1003 19:42:02.106853  496330 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1003 19:42:02.107444  496330 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-277907" does not appear in /home/jenkins/minikube-integration/21625-284583/kubeconfig
	I1003 19:42:02.107712  496330 kubeconfig.go:62] /home/jenkins/minikube-integration/21625-284583/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-277907" cluster setting kubeconfig missing "newest-cni-277907" context setting]
	I1003 19:42:02.108230  496330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/kubeconfig: {Name:mkc1323fd87f4a78231a26d2dab0dff7feecf1e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:42:02.109732  496330 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1003 19:42:02.134182  496330 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1003 19:42:02.134213  496330 kubeadm.go:601] duration metric: took 39.065513ms to restartPrimaryControlPlane
	I1003 19:42:02.134223  496330 kubeadm.go:402] duration metric: took 144.434833ms to StartCluster
	I1003 19:42:02.134237  496330 settings.go:142] acquiring lock: {Name:mkc95577dbc448e3409dfa2b5e53a3a1327cb451 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:42:02.134309  496330 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21625-284583/kubeconfig
	I1003 19:42:02.135258  496330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/kubeconfig: {Name:mkc1323fd87f4a78231a26d2dab0dff7feecf1e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:42:02.135496  496330 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1003 19:42:02.135709  496330 config.go:182] Loaded profile config "newest-cni-277907": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 19:42:02.135846  496330 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1003 19:42:02.135920  496330 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-277907"
	I1003 19:42:02.135941  496330 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-277907"
	W1003 19:42:02.135950  496330 addons.go:247] addon storage-provisioner should already be in state true
	I1003 19:42:02.135973  496330 host.go:66] Checking if "newest-cni-277907" exists ...
	I1003 19:42:02.136029  496330 addons.go:69] Setting dashboard=true in profile "newest-cni-277907"
	I1003 19:42:02.136079  496330 addons.go:238] Setting addon dashboard=true in "newest-cni-277907"
	W1003 19:42:02.136100  496330 addons.go:247] addon dashboard should already be in state true
	I1003 19:42:02.136151  496330 host.go:66] Checking if "newest-cni-277907" exists ...
	I1003 19:42:02.136820  496330 cli_runner.go:164] Run: docker container inspect newest-cni-277907 --format={{.State.Status}}
	I1003 19:42:02.137128  496330 cli_runner.go:164] Run: docker container inspect newest-cni-277907 --format={{.State.Status}}
	I1003 19:42:02.138228  496330 addons.go:69] Setting default-storageclass=true in profile "newest-cni-277907"
	I1003 19:42:02.138257  496330 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-277907"
	I1003 19:42:02.138655  496330 cli_runner.go:164] Run: docker container inspect newest-cni-277907 --format={{.State.Status}}
	I1003 19:42:02.146641  496330 out.go:179] * Verifying Kubernetes components...
	I1003 19:42:02.149848  496330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1003 19:42:02.206184  496330 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1003 19:42:02.210108  496330 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1003 19:42:02.213723  496330 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1003 19:42:00.171338  492927 pod_ready.go:104] pod "coredns-66bc5c9577-l8knz" is not "Ready", error: <nil>
	W1003 19:42:02.174293  492927 pod_ready.go:104] pod "coredns-66bc5c9577-l8knz" is not "Ready", error: <nil>
	I1003 19:42:02.213725  496330 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1003 19:42:02.213834  496330 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1003 19:42:02.213913  496330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-277907
	I1003 19:42:02.216886  496330 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 19:42:02.216911  496330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1003 19:42:02.216976  496330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-277907
	I1003 19:42:02.237714  496330 addons.go:238] Setting addon default-storageclass=true in "newest-cni-277907"
	W1003 19:42:02.237740  496330 addons.go:247] addon default-storageclass should already be in state true
	I1003 19:42:02.237765  496330 host.go:66] Checking if "newest-cni-277907" exists ...
	I1003 19:42:02.238177  496330 cli_runner.go:164] Run: docker container inspect newest-cni-277907 --format={{.State.Status}}
	I1003 19:42:02.265005  496330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/newest-cni-277907/id_rsa Username:docker}
	I1003 19:42:02.273019  496330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/newest-cni-277907/id_rsa Username:docker}
	I1003 19:42:02.288819  496330 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1003 19:42:02.288841  496330 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1003 19:42:02.288902  496330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-277907
	I1003 19:42:02.318259  496330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/newest-cni-277907/id_rsa Username:docker}
	I1003 19:42:02.530550  496330 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1003 19:42:02.541602  496330 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1003 19:42:02.541679  496330 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1003 19:42:02.557502  496330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1003 19:42:02.579345  496330 api_server.go:52] waiting for apiserver process to appear ...
	I1003 19:42:02.579417  496330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 19:42:02.597846  496330 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1003 19:42:02.597871  496330 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1003 19:42:02.613301  496330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1003 19:42:02.648622  496330 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1003 19:42:02.648648  496330 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1003 19:42:02.731767  496330 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1003 19:42:02.731793  496330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1003 19:42:02.812627  496330 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1003 19:42:02.812651  496330 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1003 19:42:02.841688  496330 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1003 19:42:02.841734  496330 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1003 19:42:02.870058  496330 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1003 19:42:02.870083  496330 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1003 19:42:02.925263  496330 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1003 19:42:02.925287  496330 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1003 19:42:02.952871  496330 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1003 19:42:02.952897  496330 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1003 19:42:02.976325  496330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1003 19:42:07.745046  496330 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (5.165599909s)
	I1003 19:42:07.745078  496330 api_server.go:72] duration metric: took 5.609559819s to wait for apiserver process to appear ...
	I1003 19:42:07.745085  496330 api_server.go:88] waiting for apiserver healthz status ...
	I1003 19:42:07.745102  496330 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1003 19:42:07.745406  496330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.13207849s)
	I1003 19:42:07.746625  496330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.189054454s)
	I1003 19:42:07.776204  496330 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1003 19:42:07.776233  496330 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
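	The 500 above is transient: every post-start hook except rbac/bootstrap-roles reports ok, and that hook typically flips to ok once the default RBAC objects have been created, which matches the 200 returned a second later. The same verbose healthz view can be fetched directly (the insecure -k only makes sense against a throwaway test cluster, and an authenticated client may be needed depending on RBAC state):

	    curl -k 'https://192.168.85.2:8443/healthz?verbose'
	    curl -k 'https://192.168.85.2:8443/healthz/poststarthook/rbac/bootstrap-roles'   # just the failing check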
	I1003 19:42:07.810390  496330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.83401929s)
	I1003 19:42:07.813770  496330 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-277907 addons enable metrics-server
	
	I1003 19:42:07.816441  496330 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	W1003 19:42:04.666086  492927 pod_ready.go:104] pod "coredns-66bc5c9577-l8knz" is not "Ready", error: <nil>
	W1003 19:42:07.169781  492927 pod_ready.go:104] pod "coredns-66bc5c9577-l8knz" is not "Ready", error: <nil>
	I1003 19:42:07.819559  496330 addons.go:514] duration metric: took 5.683791267s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1003 19:42:08.245394  496330 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1003 19:42:08.258204  496330 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1003 19:42:08.259339  496330 api_server.go:141] control plane version: v1.34.1
	I1003 19:42:08.259408  496330 api_server.go:131] duration metric: took 514.315619ms to wait for apiserver health ...
	I1003 19:42:08.259433  496330 system_pods.go:43] waiting for kube-system pods to appear ...
	I1003 19:42:08.265595  496330 system_pods.go:59] 8 kube-system pods found
	I1003 19:42:08.265687  496330 system_pods.go:61] "coredns-66bc5c9577-qvbbr" [1cd277df-18e2-4280-aed7-5f55acbafa2e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1003 19:42:08.265713  496330 system_pods.go:61] "etcd-newest-cni-277907" [9a388045-313d-4a5e-a56a-c070a23d10f0] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1003 19:42:08.265750  496330 system_pods.go:61] "kindnet-b6wxk" [efbd6505-dbd9-4229-9f30-5de99ce9258e] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1003 19:42:08.265779  496330 system_pods.go:61] "kube-apiserver-newest-cni-277907" [e333974e-7706-4dd3-a108-96d50d755815] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1003 19:42:08.265799  496330 system_pods.go:61] "kube-controller-manager-newest-cni-277907" [ca367ef6-21e7-49f2-bb9e-a73465e96941] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1003 19:42:08.265836  496330 system_pods.go:61] "kube-proxy-2ss46" [3e843f2f-9e62-4da8-a413-b23a4e8c33ef] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1003 19:42:08.265863  496330 system_pods.go:61] "kube-scheduler-newest-cni-277907" [7d578ea2-dbb0-4886-96d7-ed212ff4907a] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1003 19:42:08.265884  496330 system_pods.go:61] "storage-provisioner" [da0d0bff-83e0-4502-b45b-5becfa549ef9] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1003 19:42:08.265918  496330 system_pods.go:74] duration metric: took 6.465857ms to wait for pod list to return data ...
	I1003 19:42:08.265946  496330 default_sa.go:34] waiting for default service account to be created ...
	I1003 19:42:08.269366  496330 default_sa.go:45] found service account: "default"
	I1003 19:42:08.269434  496330 default_sa.go:55] duration metric: took 3.458213ms for default service account to be created ...
	I1003 19:42:08.269460  496330 kubeadm.go:586] duration metric: took 6.133940613s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1003 19:42:08.269505  496330 node_conditions.go:102] verifying NodePressure condition ...
	I1003 19:42:08.272314  496330 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1003 19:42:08.272395  496330 node_conditions.go:123] node cpu capacity is 2
	I1003 19:42:08.272423  496330 node_conditions.go:105] duration metric: took 2.89488ms to run NodePressure ...
	I1003 19:42:08.272450  496330 start.go:241] waiting for startup goroutines ...
	I1003 19:42:08.272483  496330 start.go:246] waiting for cluster config update ...
	I1003 19:42:08.272511  496330 start.go:255] writing updated cluster config ...
	I1003 19:42:08.272895  496330 ssh_runner.go:195] Run: rm -f paused
	I1003 19:42:08.346481  496330 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1003 19:42:08.351677  496330 out.go:179] * Done! kubectl is now configured to use "newest-cni-277907" cluster and "default" namespace by default
	W1003 19:42:09.665900  492927 pod_ready.go:104] pod "coredns-66bc5c9577-l8knz" is not "Ready", error: <nil>
	I1003 19:42:10.182367  492927 pod_ready.go:94] pod "coredns-66bc5c9577-l8knz" is "Ready"
	I1003 19:42:10.182396  492927 pod_ready.go:86] duration metric: took 32.522452921s for pod "coredns-66bc5c9577-l8knz" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:42:10.190311  492927 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-842797" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:42:10.198116  492927 pod_ready.go:94] pod "etcd-default-k8s-diff-port-842797" is "Ready"
	I1003 19:42:10.198142  492927 pod_ready.go:86] duration metric: took 7.802707ms for pod "etcd-default-k8s-diff-port-842797" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:42:10.202592  492927 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-842797" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:42:10.209160  492927 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-842797" is "Ready"
	I1003 19:42:10.209186  492927 pod_ready.go:86] duration metric: took 6.567233ms for pod "kube-apiserver-default-k8s-diff-port-842797" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:42:10.213271  492927 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-842797" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:42:10.363451  492927 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-842797" is "Ready"
	I1003 19:42:10.363649  492927 pod_ready.go:86] duration metric: took 150.347798ms for pod "kube-controller-manager-default-k8s-diff-port-842797" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:42:10.564287  492927 pod_ready.go:83] waiting for pod "kube-proxy-gvslj" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:42:10.963683  492927 pod_ready.go:94] pod "kube-proxy-gvslj" is "Ready"
	I1003 19:42:10.963752  492927 pod_ready.go:86] duration metric: took 399.435374ms for pod "kube-proxy-gvslj" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:42:11.164167  492927 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-842797" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:42:11.564966  492927 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-842797" is "Ready"
	I1003 19:42:11.564991  492927 pod_ready.go:86] duration metric: took 400.7524ms for pod "kube-scheduler-default-k8s-diff-port-842797" in "kube-system" namespace to be "Ready" or be gone ...
	I1003 19:42:11.565009  492927 pod_ready.go:40] duration metric: took 33.956882521s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1003 19:42:11.647394  492927 start.go:623] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1003 19:42:11.650572  492927 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-842797" cluster and "default" namespace by default
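	The pod_ready polling above can be approximated with plain kubectl; a sketch against the same kube-dns label, assuming the default-k8s-diff-port-842797 context is the active one:

	    kubectl wait --for=condition=Ready pod -l k8s-app=kube-dns \
	      -n kube-system --timeout=120s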
	
	
	==> CRI-O <==
	Oct 03 19:42:07 newest-cni-277907 crio[611]: time="2025-10-03T19:42:07.583302276Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 19:42:07 newest-cni-277907 crio[611]: time="2025-10-03T19:42:07.586403789Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-2ss46/POD" id=cd6108ce-3776-45e1-b4b9-5849086da9e6 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 03 19:42:07 newest-cni-277907 crio[611]: time="2025-10-03T19:42:07.586472426Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 19:42:07 newest-cni-277907 crio[611]: time="2025-10-03T19:42:07.594082744Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=eac5b371-8cdb-4864-9920-d0bba20ea7be name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 03 19:42:07 newest-cni-277907 crio[611]: time="2025-10-03T19:42:07.595087305Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=cd6108ce-3776-45e1-b4b9-5849086da9e6 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 03 19:42:07 newest-cni-277907 crio[611]: time="2025-10-03T19:42:07.606732427Z" level=info msg="Ran pod sandbox b533272fbef9f4ef6ed9587f60d1578d56c725838904b1e44f52a8a47d9678d5 with infra container: kube-system/kindnet-b6wxk/POD" id=eac5b371-8cdb-4864-9920-d0bba20ea7be name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 03 19:42:07 newest-cni-277907 crio[611]: time="2025-10-03T19:42:07.60939646Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=3c2900a3-cd3b-435f-837a-dcde0fd7db94 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 19:42:07 newest-cni-277907 crio[611]: time="2025-10-03T19:42:07.614817427Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=d01da6a4-2116-4668-98a2-d9f1241c6674 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 19:42:07 newest-cni-277907 crio[611]: time="2025-10-03T19:42:07.615987751Z" level=info msg="Creating container: kube-system/kindnet-b6wxk/kindnet-cni" id=036098f8-aba0-4ebd-a38b-d4bd981e2137 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:42:07 newest-cni-277907 crio[611]: time="2025-10-03T19:42:07.616331699Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 19:42:07 newest-cni-277907 crio[611]: time="2025-10-03T19:42:07.624530392Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 19:42:07 newest-cni-277907 crio[611]: time="2025-10-03T19:42:07.635335221Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 19:42:07 newest-cni-277907 crio[611]: time="2025-10-03T19:42:07.635837104Z" level=info msg="Ran pod sandbox 71d20ad61a98aed1ed611bf5682d771a6aa665e8c02bdaf3e4dbf56b9d943263 with infra container: kube-system/kube-proxy-2ss46/POD" id=cd6108ce-3776-45e1-b4b9-5849086da9e6 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 03 19:42:07 newest-cni-277907 crio[611]: time="2025-10-03T19:42:07.652339541Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=4f15c193-fe95-4004-bea4-3b98af4e6255 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 19:42:07 newest-cni-277907 crio[611]: time="2025-10-03T19:42:07.653783297Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=b6c34717-56ae-492d-a41e-f3b782ba8285 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 19:42:07 newest-cni-277907 crio[611]: time="2025-10-03T19:42:07.654954211Z" level=info msg="Creating container: kube-system/kube-proxy-2ss46/kube-proxy" id=742cd70f-7c44-44a1-a981-9cc16118eab4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:42:07 newest-cni-277907 crio[611]: time="2025-10-03T19:42:07.655638833Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 19:42:07 newest-cni-277907 crio[611]: time="2025-10-03T19:42:07.67905154Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 19:42:07 newest-cni-277907 crio[611]: time="2025-10-03T19:42:07.679813472Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 19:42:07 newest-cni-277907 crio[611]: time="2025-10-03T19:42:07.724409928Z" level=info msg="Created container 0bb2708b8a68c6bf83e7c6ebde209424b7a34780db11c23f8c8ee479b9536089: kube-system/kindnet-b6wxk/kindnet-cni" id=036098f8-aba0-4ebd-a38b-d4bd981e2137 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:42:07 newest-cni-277907 crio[611]: time="2025-10-03T19:42:07.725354525Z" level=info msg="Starting container: 0bb2708b8a68c6bf83e7c6ebde209424b7a34780db11c23f8c8ee479b9536089" id=2b81076f-8c32-4347-a043-d9d9be39a8be name=/runtime.v1.RuntimeService/StartContainer
	Oct 03 19:42:07 newest-cni-277907 crio[611]: time="2025-10-03T19:42:07.731538113Z" level=info msg="Started container" PID=1061 containerID=0bb2708b8a68c6bf83e7c6ebde209424b7a34780db11c23f8c8ee479b9536089 description=kube-system/kindnet-b6wxk/kindnet-cni id=2b81076f-8c32-4347-a043-d9d9be39a8be name=/runtime.v1.RuntimeService/StartContainer sandboxID=b533272fbef9f4ef6ed9587f60d1578d56c725838904b1e44f52a8a47d9678d5
	Oct 03 19:42:07 newest-cni-277907 crio[611]: time="2025-10-03T19:42:07.816883039Z" level=info msg="Created container e9387afb5b6ce8d012cecb62f497dd44a46bfcfa85872e279424e14948ca19e3: kube-system/kube-proxy-2ss46/kube-proxy" id=742cd70f-7c44-44a1-a981-9cc16118eab4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:42:07 newest-cni-277907 crio[611]: time="2025-10-03T19:42:07.818075943Z" level=info msg="Starting container: e9387afb5b6ce8d012cecb62f497dd44a46bfcfa85872e279424e14948ca19e3" id=bea3cb77-b43a-4b98-9708-27642ccaca92 name=/runtime.v1.RuntimeService/StartContainer
	Oct 03 19:42:07 newest-cni-277907 crio[611]: time="2025-10-03T19:42:07.821497142Z" level=info msg="Started container" PID=1064 containerID=e9387afb5b6ce8d012cecb62f497dd44a46bfcfa85872e279424e14948ca19e3 description=kube-system/kube-proxy-2ss46/kube-proxy id=bea3cb77-b43a-4b98-9708-27642ccaca92 name=/runtime.v1.RuntimeService/StartContainer sandboxID=71d20ad61a98aed1ed611bf5682d771a6aa665e8c02bdaf3e4dbf56b9d943263
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	e9387afb5b6ce       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   6 seconds ago       Running             kube-proxy                1                   71d20ad61a98a       kube-proxy-2ss46                            kube-system
	0bb2708b8a68c       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   6 seconds ago       Running             kindnet-cni               1                   b533272fbef9f       kindnet-b6wxk                               kube-system
	e013c184e6e3a       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   12 seconds ago      Running             etcd                      1                   385cfb31fd940       etcd-newest-cni-277907                      kube-system
	ef5fea601208f       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   12 seconds ago      Running             kube-controller-manager   1                   2aec7d9732cb4       kube-controller-manager-newest-cni-277907   kube-system
	d54346ccf42f5       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   12 seconds ago      Running             kube-apiserver            1                   0687f047a497b       kube-apiserver-newest-cni-277907            kube-system
	19786ebd68db6       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   12 seconds ago      Running             kube-scheduler            1                   5e346fa738c07       kube-scheduler-newest-cni-277907            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-277907
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-277907
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a43873c79fc22f8b1ccd29d3dfa635d392b09335
	                    minikube.k8s.io/name=newest-cni-277907
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_03T19_41_41_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 03 Oct 2025 19:41:37 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-277907
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 03 Oct 2025 19:42:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 03 Oct 2025 19:42:06 +0000   Fri, 03 Oct 2025 19:41:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 03 Oct 2025 19:42:06 +0000   Fri, 03 Oct 2025 19:41:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 03 Oct 2025 19:42:06 +0000   Fri, 03 Oct 2025 19:41:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 03 Oct 2025 19:42:06 +0000   Fri, 03 Oct 2025 19:41:29 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-277907
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 aa9cc27629e84e84bad28b65f03df7b6
	  System UUID:                20e576e4-dd3f-4016-9b52-c906c3cc7f99
	  Boot ID:                    3762136e-8bec-4104-a5cb-0b1976f6048e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-277907                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         34s
	  kube-system                 kindnet-b6wxk                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-newest-cni-277907             250m (12%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-controller-manager-newest-cni-277907    200m (10%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-2ss46                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-newest-cni-277907             100m (5%)     0 (0%)      0 (0%)           0 (0%)         34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 27s                kube-proxy       
	  Normal   Starting                 6s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  46s (x8 over 46s)  kubelet          Node newest-cni-277907 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    46s (x8 over 46s)  kubelet          Node newest-cni-277907 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     46s (x8 over 46s)  kubelet          Node newest-cni-277907 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    34s                kubelet          Node newest-cni-277907 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 34s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  34s                kubelet          Node newest-cni-277907 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     34s                kubelet          Node newest-cni-277907 status is now: NodeHasSufficientPID
	  Normal   Starting                 34s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           30s                node-controller  Node newest-cni-277907 event: Registered Node newest-cni-277907 in Controller
	  Normal   Starting                 13s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 13s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  13s (x8 over 13s)  kubelet          Node newest-cni-277907 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13s (x8 over 13s)  kubelet          Node newest-cni-277907 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13s (x8 over 13s)  kubelet          Node newest-cni-277907 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5s                 node-controller  Node newest-cni-277907 event: Registered Node newest-cni-277907 in Controller
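	The NotReady condition and the node.kubernetes.io/not-ready taint shown above both trace back to the missing CNI config ("no CNI configuration file in /etc/cni/net.d/") and clear once kindnet writes its config; that is also why coredns and storage-provisioner were still Pending with an untolerated-taint message earlier in this log. One way to watch for the config landing, using the profile name from this log:

	    minikube -p newest-cni-277907 ssh -- ls /etc/cni/net.d/
	    kubectl get nodes -w    # Ready flips to True once the CNI config is in place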
	
	
	==> dmesg <==
	[ +24.839009] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:13] overlayfs: idmapped layers are currently not supported
	[ +26.493253] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:15] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:16] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:17] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000010] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Oct 3 19:18] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:20] overlayfs: idmapped layers are currently not supported
	[ +32.018892] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:22] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:24] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:26] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:32] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:34] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:35] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:36] overlayfs: idmapped layers are currently not supported
	[  +4.740983] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:38] overlayfs: idmapped layers are currently not supported
	[ +12.897300] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:39] overlayfs: idmapped layers are currently not supported
	[  +4.104516] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:41] overlayfs: idmapped layers are currently not supported
	[  +1.990678] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:42] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [e013c184e6e3ac3b12ebb1e788f88a522df87d865c2cded32ce1ba2140687d59] <==
	{"level":"warn","ts":"2025-10-03T19:42:04.454013Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:42:04.478775Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:42:04.495674Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:42:04.506969Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:42:04.537780Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:42:04.548384Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:42:04.567855Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:42:04.591694Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:42:04.635974Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:42:04.662926Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:42:04.700030Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:42:04.713102Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:42:04.727703Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:42:04.747432Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:42:04.759849Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:42:04.792768Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:42:04.803043Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:42:04.819061Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:42:04.836677Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:42:04.855192Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:42:04.892512Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:42:04.900400Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:42:04.932158Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:42:04.955220Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:42:05.131651Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51248","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 19:42:14 up  2:24,  0 user,  load average: 5.65, 3.92, 2.72
	Linux newest-cni-277907 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0bb2708b8a68c6bf83e7c6ebde209424b7a34780db11c23f8c8ee479b9536089] <==
	I1003 19:42:07.894809       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1003 19:42:07.895876       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1003 19:42:07.899908       1 main.go:148] setting mtu 1500 for CNI 
	I1003 19:42:07.899981       1 main.go:178] kindnetd IP family: "ipv4"
	I1003 19:42:07.900024       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-03T19:42:08Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1003 19:42:08.095569       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1003 19:42:08.095665       1 controller.go:381] "Waiting for informer caches to sync"
	I1003 19:42:08.095700       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1003 19:42:08.096611       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [d54346ccf42f503b43643a2a4f2797f3f6219e7ebb4f15de4620be40f934e579] <==
	I1003 19:42:06.259998       1 policy_source.go:240] refreshing policies
	I1003 19:42:06.279709       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1003 19:42:06.288488       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1003 19:42:06.288512       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1003 19:42:06.288608       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1003 19:42:06.288648       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1003 19:42:06.288679       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1003 19:42:06.309016       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1003 19:42:06.309224       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1003 19:42:06.380453       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1003 19:42:06.380943       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1003 19:42:06.440083       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1003 19:42:06.489371       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1003 19:42:06.931188       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1003 19:42:07.188692       1 controller.go:667] quota admission added evaluator for: namespaces
	I1003 19:42:07.264888       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1003 19:42:07.344097       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1003 19:42:07.396423       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1003 19:42:07.482785       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1003 19:42:07.776762       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.163.249"}
	I1003 19:42:07.802725       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.191.83"}
	I1003 19:42:09.674890       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1003 19:42:09.872192       1 controller.go:667] quota admission added evaluator for: endpoints
	I1003 19:42:10.030835       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1003 19:42:10.128014       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [ef5fea601208f50b53f6eef5d5284a014ca62a5cdc7ba7676e680d130cb543cb] <==
	I1003 19:42:09.499126       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1003 19:42:09.499211       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1003 19:42:09.502594       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1003 19:42:09.504828       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1003 19:42:09.507234       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1003 19:42:09.508364       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1003 19:42:09.511309       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1003 19:42:09.515864       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1003 19:42:09.515941       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1003 19:42:09.517106       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1003 19:42:09.517599       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1003 19:42:09.517699       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1003 19:42:09.521736       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1003 19:42:09.521817       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1003 19:42:09.521856       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1003 19:42:09.528145       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1003 19:42:09.529199       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1003 19:42:09.529314       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1003 19:42:09.531263       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1003 19:42:09.531285       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1003 19:42:09.549886       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1003 19:42:09.552062       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1003 19:42:09.552143       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1003 19:42:09.565483       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1003 19:42:09.567835       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	
	
	==> kube-proxy [e9387afb5b6ce8d012cecb62f497dd44a46bfcfa85872e279424e14948ca19e3] <==
	I1003 19:42:07.975330       1 server_linux.go:53] "Using iptables proxy"
	I1003 19:42:08.078273       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1003 19:42:08.188804       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1003 19:42:08.200881       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1003 19:42:08.205411       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1003 19:42:08.518931       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1003 19:42:08.519053       1 server_linux.go:132] "Using iptables Proxier"
	I1003 19:42:08.522933       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1003 19:42:08.523320       1 server.go:527] "Version info" version="v1.34.1"
	I1003 19:42:08.523503       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1003 19:42:08.524937       1 config.go:200] "Starting service config controller"
	I1003 19:42:08.525009       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1003 19:42:08.525051       1 config.go:106] "Starting endpoint slice config controller"
	I1003 19:42:08.525081       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1003 19:42:08.525114       1 config.go:403] "Starting serviceCIDR config controller"
	I1003 19:42:08.525142       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1003 19:42:08.525854       1 config.go:309] "Starting node config controller"
	I1003 19:42:08.525903       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1003 19:42:08.525930       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1003 19:42:08.625626       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1003 19:42:08.625763       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1003 19:42:08.625778       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [19786ebd68db6b6c5bd023f0384178b772b9a909a9ca5278f768374892e103d8] <==
	I1003 19:42:06.987303       1 serving.go:386] Generated self-signed cert in-memory
	I1003 19:42:08.586895       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1003 19:42:08.586933       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1003 19:42:08.592566       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1003 19:42:08.594588       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1003 19:42:08.594629       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1003 19:42:08.594654       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1003 19:42:08.597583       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1003 19:42:08.597606       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1003 19:42:08.597624       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1003 19:42:08.597631       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1003 19:42:08.695739       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1003 19:42:08.698356       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1003 19:42:08.698475       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 03 19:42:06 newest-cni-277907 kubelet[728]: I1003 19:42:06.568891     728 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-277907"
	Oct 03 19:42:06 newest-cni-277907 kubelet[728]: I1003 19:42:06.568982     728 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-277907"
	Oct 03 19:42:06 newest-cni-277907 kubelet[728]: I1003 19:42:06.569007     728 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 03 19:42:06 newest-cni-277907 kubelet[728]: I1003 19:42:06.570211     728 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 03 19:42:06 newest-cni-277907 kubelet[728]: I1003 19:42:06.585395     728 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-277907"
	Oct 03 19:42:06 newest-cni-277907 kubelet[728]: E1003 19:42:06.634451     728 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-277907\" already exists" pod="kube-system/kube-apiserver-newest-cni-277907"
	Oct 03 19:42:06 newest-cni-277907 kubelet[728]: I1003 19:42:06.634486     728 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-277907"
	Oct 03 19:42:06 newest-cni-277907 kubelet[728]: E1003 19:42:06.680911     728 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-277907\" already exists" pod="kube-system/kube-controller-manager-newest-cni-277907"
	Oct 03 19:42:06 newest-cni-277907 kubelet[728]: I1003 19:42:06.680945     728 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-277907"
	Oct 03 19:42:06 newest-cni-277907 kubelet[728]: E1003 19:42:06.704504     728 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-277907\" already exists" pod="kube-system/kube-scheduler-newest-cni-277907"
	Oct 03 19:42:06 newest-cni-277907 kubelet[728]: I1003 19:42:06.704540     728 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-277907"
	Oct 03 19:42:06 newest-cni-277907 kubelet[728]: E1003 19:42:06.729886     728 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-277907\" already exists" pod="kube-system/etcd-newest-cni-277907"
	Oct 03 19:42:07 newest-cni-277907 kubelet[728]: I1003 19:42:07.266879     728 apiserver.go:52] "Watching apiserver"
	Oct 03 19:42:07 newest-cni-277907 kubelet[728]: I1003 19:42:07.285355     728 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 03 19:42:07 newest-cni-277907 kubelet[728]: I1003 19:42:07.335720     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3e843f2f-9e62-4da8-a413-b23a4e8c33ef-xtables-lock\") pod \"kube-proxy-2ss46\" (UID: \"3e843f2f-9e62-4da8-a413-b23a4e8c33ef\") " pod="kube-system/kube-proxy-2ss46"
	Oct 03 19:42:07 newest-cni-277907 kubelet[728]: I1003 19:42:07.335946     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/efbd6505-dbd9-4229-9f30-5de99ce9258e-cni-cfg\") pod \"kindnet-b6wxk\" (UID: \"efbd6505-dbd9-4229-9f30-5de99ce9258e\") " pod="kube-system/kindnet-b6wxk"
	Oct 03 19:42:07 newest-cni-277907 kubelet[728]: I1003 19:42:07.336060     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/efbd6505-dbd9-4229-9f30-5de99ce9258e-xtables-lock\") pod \"kindnet-b6wxk\" (UID: \"efbd6505-dbd9-4229-9f30-5de99ce9258e\") " pod="kube-system/kindnet-b6wxk"
	Oct 03 19:42:07 newest-cni-277907 kubelet[728]: I1003 19:42:07.336157     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/efbd6505-dbd9-4229-9f30-5de99ce9258e-lib-modules\") pod \"kindnet-b6wxk\" (UID: \"efbd6505-dbd9-4229-9f30-5de99ce9258e\") " pod="kube-system/kindnet-b6wxk"
	Oct 03 19:42:07 newest-cni-277907 kubelet[728]: I1003 19:42:07.336256     728 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e843f2f-9e62-4da8-a413-b23a4e8c33ef-lib-modules\") pod \"kube-proxy-2ss46\" (UID: \"3e843f2f-9e62-4da8-a413-b23a4e8c33ef\") " pod="kube-system/kube-proxy-2ss46"
	Oct 03 19:42:07 newest-cni-277907 kubelet[728]: I1003 19:42:07.372131     728 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 03 19:42:07 newest-cni-277907 kubelet[728]: W1003 19:42:07.603186     728 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/8b59090431046f9d951b48ace59a9091019f835007d577cd4555f6908daa6561/crio-b533272fbef9f4ef6ed9587f60d1578d56c725838904b1e44f52a8a47d9678d5 WatchSource:0}: Error finding container b533272fbef9f4ef6ed9587f60d1578d56c725838904b1e44f52a8a47d9678d5: Status 404 returned error can't find the container with id b533272fbef9f4ef6ed9587f60d1578d56c725838904b1e44f52a8a47d9678d5
	Oct 03 19:42:07 newest-cni-277907 kubelet[728]: W1003 19:42:07.614246     728 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/8b59090431046f9d951b48ace59a9091019f835007d577cd4555f6908daa6561/crio-71d20ad61a98aed1ed611bf5682d771a6aa665e8c02bdaf3e4dbf56b9d943263 WatchSource:0}: Error finding container 71d20ad61a98aed1ed611bf5682d771a6aa665e8c02bdaf3e4dbf56b9d943263: Status 404 returned error can't find the container with id 71d20ad61a98aed1ed611bf5682d771a6aa665e8c02bdaf3e4dbf56b9d943263
	Oct 03 19:42:09 newest-cni-277907 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 03 19:42:09 newest-cni-277907 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 03 19:42:09 newest-cni-277907 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-277907 -n newest-cni-277907
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-277907 -n newest-cni-277907: exit status 2 (403.64286ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-277907 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-qvbbr storage-provisioner dashboard-metrics-scraper-6ffb444bf9-fzg6v kubernetes-dashboard-855c9754f9-v76lx
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-277907 describe pod coredns-66bc5c9577-qvbbr storage-provisioner dashboard-metrics-scraper-6ffb444bf9-fzg6v kubernetes-dashboard-855c9754f9-v76lx
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-277907 describe pod coredns-66bc5c9577-qvbbr storage-provisioner dashboard-metrics-scraper-6ffb444bf9-fzg6v kubernetes-dashboard-855c9754f9-v76lx: exit status 1 (91.617689ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-qvbbr" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-fzg6v" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-v76lx" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-277907 describe pod coredns-66bc5c9577-qvbbr storage-provisioner dashboard-metrics-scraper-6ffb444bf9-fzg6v kubernetes-dashboard-855c9754f9-v76lx: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (6.43s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (7.16s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-842797 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p default-k8s-diff-port-842797 --alsologtostderr -v=1: exit status 80 (2.311005905s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-842797 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 19:42:23.680512  500167 out.go:360] Setting OutFile to fd 1 ...
	I1003 19:42:23.681720  500167 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 19:42:23.681821  500167 out.go:374] Setting ErrFile to fd 2...
	I1003 19:42:23.681845  500167 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 19:42:23.682170  500167 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-284583/.minikube/bin
	I1003 19:42:23.682477  500167 out.go:368] Setting JSON to false
	I1003 19:42:23.682526  500167 mustload.go:65] Loading cluster: default-k8s-diff-port-842797
	I1003 19:42:23.685324  500167 config.go:182] Loaded profile config "default-k8s-diff-port-842797": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 19:42:23.685910  500167 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-842797 --format={{.State.Status}}
	I1003 19:42:23.761791  500167 host.go:66] Checking if "default-k8s-diff-port-842797" exists ...
	I1003 19:42:23.762093  500167 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 19:42:23.925558  500167 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:64 SystemTime:2025-10-03 19:42:23.914184964 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1003 19:42:23.926193  500167 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1758198818-20370/minikube-v1.37.0-1758198818-20370-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1758198818-20370-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-842797 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1003 19:42:23.930384  500167 out.go:179] * Pausing node default-k8s-diff-port-842797 ... 
	I1003 19:42:23.934477  500167 host.go:66] Checking if "default-k8s-diff-port-842797" exists ...
	I1003 19:42:23.934801  500167 ssh_runner.go:195] Run: systemctl --version
	I1003 19:42:23.934841  500167 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-842797
	I1003 19:42:23.989614  500167 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/default-k8s-diff-port-842797/id_rsa Username:docker}
	I1003 19:42:24.175830  500167 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 19:42:24.213358  500167 pause.go:51] kubelet running: true
	I1003 19:42:24.213433  500167 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1003 19:42:24.476280  500167 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1003 19:42:24.476362  500167 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1003 19:42:24.544949  500167 cri.go:89] found id: "fcd9a4b62f08b140b979667d54c075893895688705f962511f443c0a62e2c87a"
	I1003 19:42:24.544970  500167 cri.go:89] found id: "82b814a25f1f5e3ed6844334a8df2fe3ccfa2c194455da2c0e360c30e6aaca7e"
	I1003 19:42:24.544979  500167 cri.go:89] found id: "36a66515edb26dfbfcd4d2a7fd0c17ac0037c754a0f101544d32fe0f3d820b72"
	I1003 19:42:24.544983  500167 cri.go:89] found id: "9c16c456853d85ab9638243feba1261a05c0a8c713822477310b074fb4eb4723"
	I1003 19:42:24.544986  500167 cri.go:89] found id: "5a18bc974715b940aec68811c77d1e74f00fd5e65c2098ece1b868b46c87fb02"
	I1003 19:42:24.544989  500167 cri.go:89] found id: "02535cb7690885e90adcc200c551315486edf2d6f1bb2cbd015e185c373fe0c2"
	I1003 19:42:24.544992  500167 cri.go:89] found id: "72a3c6c093ee7526caa8d968d0ef1b63f258556b89c398a06f6b15295b410635"
	I1003 19:42:24.544995  500167 cri.go:89] found id: "95f720e182dbb5dbc9ca0b55d30ef0869679c1087e3e87174822cffb7d42a5ea"
	I1003 19:42:24.544999  500167 cri.go:89] found id: "a6485da9cdb1c66096d6663ef94b1c675b5cc8904328eba3b2537fa5c260cdba"
	I1003 19:42:24.545004  500167 cri.go:89] found id: "65412dde6368e824654930cf8979c4db2cbf6850df87b339d97d58e62d902100"
	I1003 19:42:24.545007  500167 cri.go:89] found id: "ebeaa951c920c3a9a1c23debb610071301437a5219a14cd30b8336ab848dfff9"
	I1003 19:42:24.545010  500167 cri.go:89] found id: ""
	I1003 19:42:24.545057  500167 ssh_runner.go:195] Run: sudo runc list -f json
	I1003 19:42:24.556096  500167 retry.go:31] will retry after 263.03107ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-03T19:42:24Z" level=error msg="open /run/runc: no such file or directory"
	I1003 19:42:24.819553  500167 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 19:42:24.832663  500167 pause.go:51] kubelet running: false
	I1003 19:42:24.832758  500167 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1003 19:42:25.020949  500167 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1003 19:42:25.021048  500167 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1003 19:42:25.116895  500167 cri.go:89] found id: "fcd9a4b62f08b140b979667d54c075893895688705f962511f443c0a62e2c87a"
	I1003 19:42:25.116977  500167 cri.go:89] found id: "82b814a25f1f5e3ed6844334a8df2fe3ccfa2c194455da2c0e360c30e6aaca7e"
	I1003 19:42:25.116984  500167 cri.go:89] found id: "36a66515edb26dfbfcd4d2a7fd0c17ac0037c754a0f101544d32fe0f3d820b72"
	I1003 19:42:25.116988  500167 cri.go:89] found id: "9c16c456853d85ab9638243feba1261a05c0a8c713822477310b074fb4eb4723"
	I1003 19:42:25.116992  500167 cri.go:89] found id: "5a18bc974715b940aec68811c77d1e74f00fd5e65c2098ece1b868b46c87fb02"
	I1003 19:42:25.116995  500167 cri.go:89] found id: "02535cb7690885e90adcc200c551315486edf2d6f1bb2cbd015e185c373fe0c2"
	I1003 19:42:25.116999  500167 cri.go:89] found id: "72a3c6c093ee7526caa8d968d0ef1b63f258556b89c398a06f6b15295b410635"
	I1003 19:42:25.117001  500167 cri.go:89] found id: "95f720e182dbb5dbc9ca0b55d30ef0869679c1087e3e87174822cffb7d42a5ea"
	I1003 19:42:25.117004  500167 cri.go:89] found id: "a6485da9cdb1c66096d6663ef94b1c675b5cc8904328eba3b2537fa5c260cdba"
	I1003 19:42:25.117022  500167 cri.go:89] found id: "65412dde6368e824654930cf8979c4db2cbf6850df87b339d97d58e62d902100"
	I1003 19:42:25.117074  500167 cri.go:89] found id: "ebeaa951c920c3a9a1c23debb610071301437a5219a14cd30b8336ab848dfff9"
	I1003 19:42:25.117090  500167 cri.go:89] found id: ""
	I1003 19:42:25.117174  500167 ssh_runner.go:195] Run: sudo runc list -f json
	I1003 19:42:25.133875  500167 retry.go:31] will retry after 475.979062ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-03T19:42:25Z" level=error msg="open /run/runc: no such file or directory"
	I1003 19:42:25.610592  500167 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 19:42:25.623706  500167 pause.go:51] kubelet running: false
	I1003 19:42:25.623786  500167 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1003 19:42:25.787372  500167 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1003 19:42:25.787462  500167 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1003 19:42:25.854122  500167 cri.go:89] found id: "fcd9a4b62f08b140b979667d54c075893895688705f962511f443c0a62e2c87a"
	I1003 19:42:25.854142  500167 cri.go:89] found id: "82b814a25f1f5e3ed6844334a8df2fe3ccfa2c194455da2c0e360c30e6aaca7e"
	I1003 19:42:25.854147  500167 cri.go:89] found id: "36a66515edb26dfbfcd4d2a7fd0c17ac0037c754a0f101544d32fe0f3d820b72"
	I1003 19:42:25.854151  500167 cri.go:89] found id: "9c16c456853d85ab9638243feba1261a05c0a8c713822477310b074fb4eb4723"
	I1003 19:42:25.854157  500167 cri.go:89] found id: "5a18bc974715b940aec68811c77d1e74f00fd5e65c2098ece1b868b46c87fb02"
	I1003 19:42:25.854160  500167 cri.go:89] found id: "02535cb7690885e90adcc200c551315486edf2d6f1bb2cbd015e185c373fe0c2"
	I1003 19:42:25.854163  500167 cri.go:89] found id: "72a3c6c093ee7526caa8d968d0ef1b63f258556b89c398a06f6b15295b410635"
	I1003 19:42:25.854166  500167 cri.go:89] found id: "95f720e182dbb5dbc9ca0b55d30ef0869679c1087e3e87174822cffb7d42a5ea"
	I1003 19:42:25.854169  500167 cri.go:89] found id: "a6485da9cdb1c66096d6663ef94b1c675b5cc8904328eba3b2537fa5c260cdba"
	I1003 19:42:25.854176  500167 cri.go:89] found id: "65412dde6368e824654930cf8979c4db2cbf6850df87b339d97d58e62d902100"
	I1003 19:42:25.854179  500167 cri.go:89] found id: "ebeaa951c920c3a9a1c23debb610071301437a5219a14cd30b8336ab848dfff9"
	I1003 19:42:25.854182  500167 cri.go:89] found id: ""
	I1003 19:42:25.854239  500167 ssh_runner.go:195] Run: sudo runc list -f json
	I1003 19:42:25.869555  500167 out.go:203] 
	W1003 19:42:25.872610  500167 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-03T19:42:25Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-03T19:42:25Z" level=error msg="open /run/runc: no such file or directory"
	
	W1003 19:42:25.872632  500167 out.go:285] * 
	* 
	W1003 19:42:25.880084  500167 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1003 19:42:25.883085  500167 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p default-k8s-diff-port-842797 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-842797
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-842797:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "dd1cbce823c3c68d280f6d6431457674ab5e928f19effd4b41908fc33cc07deb",
	        "Created": "2025-10-03T19:39:31.38545341Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 493057,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-03T19:41:18.62100953Z",
	            "FinishedAt": "2025-10-03T19:41:17.637038136Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/dd1cbce823c3c68d280f6d6431457674ab5e928f19effd4b41908fc33cc07deb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dd1cbce823c3c68d280f6d6431457674ab5e928f19effd4b41908fc33cc07deb/hostname",
	        "HostsPath": "/var/lib/docker/containers/dd1cbce823c3c68d280f6d6431457674ab5e928f19effd4b41908fc33cc07deb/hosts",
	        "LogPath": "/var/lib/docker/containers/dd1cbce823c3c68d280f6d6431457674ab5e928f19effd4b41908fc33cc07deb/dd1cbce823c3c68d280f6d6431457674ab5e928f19effd4b41908fc33cc07deb-json.log",
	        "Name": "/default-k8s-diff-port-842797",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-842797:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-842797",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dd1cbce823c3c68d280f6d6431457674ab5e928f19effd4b41908fc33cc07deb",
	                "LowerDir": "/var/lib/docker/overlay2/bbf4d12c39f5d56f33173d11971fd8a2d5507eec84c402825790261c2e06dc86-init/diff:/var/lib/docker/overlay2/87b205803817b0b71a214d995ab7e10a92033bbf72d76d6e052f1d21ccecb313/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bbf4d12c39f5d56f33173d11971fd8a2d5507eec84c402825790261c2e06dc86/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bbf4d12c39f5d56f33173d11971fd8a2d5507eec84c402825790261c2e06dc86/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bbf4d12c39f5d56f33173d11971fd8a2d5507eec84c402825790261c2e06dc86/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-842797",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-842797/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-842797",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-842797",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-842797",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f8d774cbac349670b074773b2cb33dbd7126ee574e68713c0acc9f070ee9aa75",
	            "SandboxKey": "/var/run/docker/netns/f8d774cbac34",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33458"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33459"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33462"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33460"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33461"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-842797": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "be:aa:e4:04:72:84",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b6308a07ab6648978544dad609ae3a504a2e2942784508f1578ab5933d54e3b9",
	                    "EndpointID": "7a28daf2d0e8c76456f41e2238f4ce8a66988716f1b1511e71db04eeb0dffeb0",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-842797",
	                        "dd1cbce823c3"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-842797 -n default-k8s-diff-port-842797
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-842797 -n default-k8s-diff-port-842797: exit status 2 (330.984203ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-842797 logs -n 25
E1003 19:42:26.869278  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/old-k8s-version-174543/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-842797 logs -n 25: (1.331476537s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p default-k8s-diff-port-842797 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-842797 │ jenkins │ v1.37.0 │ 03 Oct 25 19:39 UTC │ 03 Oct 25 19:40 UTC │
	│ addons  │ enable metrics-server -p embed-certs-327416 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-327416           │ jenkins │ v1.37.0 │ 03 Oct 25 19:39 UTC │                     │
	│ stop    │ -p embed-certs-327416 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-327416           │ jenkins │ v1.37.0 │ 03 Oct 25 19:39 UTC │ 03 Oct 25 19:39 UTC │
	│ addons  │ enable dashboard -p embed-certs-327416 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-327416           │ jenkins │ v1.37.0 │ 03 Oct 25 19:39 UTC │ 03 Oct 25 19:39 UTC │
	│ start   │ -p embed-certs-327416 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-327416           │ jenkins │ v1.37.0 │ 03 Oct 25 19:39 UTC │ 03 Oct 25 19:40 UTC │
	│ image   │ embed-certs-327416 image list --format=json                                                                                                                                                                                                   │ embed-certs-327416           │ jenkins │ v1.37.0 │ 03 Oct 25 19:40 UTC │ 03 Oct 25 19:40 UTC │
	│ pause   │ -p embed-certs-327416 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-327416           │ jenkins │ v1.37.0 │ 03 Oct 25 19:40 UTC │                     │
	│ delete  │ -p embed-certs-327416                                                                                                                                                                                                                         │ embed-certs-327416           │ jenkins │ v1.37.0 │ 03 Oct 25 19:40 UTC │ 03 Oct 25 19:41 UTC │
	│ delete  │ -p embed-certs-327416                                                                                                                                                                                                                         │ embed-certs-327416           │ jenkins │ v1.37.0 │ 03 Oct 25 19:41 UTC │ 03 Oct 25 19:41 UTC │
	│ start   │ -p newest-cni-277907 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-277907            │ jenkins │ v1.37.0 │ 03 Oct 25 19:41 UTC │ 03 Oct 25 19:41 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-842797 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-842797 │ jenkins │ v1.37.0 │ 03 Oct 25 19:41 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-842797 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-842797 │ jenkins │ v1.37.0 │ 03 Oct 25 19:41 UTC │ 03 Oct 25 19:41 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-842797 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-842797 │ jenkins │ v1.37.0 │ 03 Oct 25 19:41 UTC │ 03 Oct 25 19:41 UTC │
	│ start   │ -p default-k8s-diff-port-842797 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-842797 │ jenkins │ v1.37.0 │ 03 Oct 25 19:41 UTC │ 03 Oct 25 19:42 UTC │
	│ addons  │ enable metrics-server -p newest-cni-277907 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-277907            │ jenkins │ v1.37.0 │ 03 Oct 25 19:41 UTC │                     │
	│ stop    │ -p newest-cni-277907 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-277907            │ jenkins │ v1.37.0 │ 03 Oct 25 19:41 UTC │ 03 Oct 25 19:41 UTC │
	│ addons  │ enable dashboard -p newest-cni-277907 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-277907            │ jenkins │ v1.37.0 │ 03 Oct 25 19:41 UTC │ 03 Oct 25 19:41 UTC │
	│ start   │ -p newest-cni-277907 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-277907            │ jenkins │ v1.37.0 │ 03 Oct 25 19:41 UTC │ 03 Oct 25 19:42 UTC │
	│ image   │ newest-cni-277907 image list --format=json                                                                                                                                                                                                    │ newest-cni-277907            │ jenkins │ v1.37.0 │ 03 Oct 25 19:42 UTC │ 03 Oct 25 19:42 UTC │
	│ pause   │ -p newest-cni-277907 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-277907            │ jenkins │ v1.37.0 │ 03 Oct 25 19:42 UTC │                     │
	│ delete  │ -p newest-cni-277907                                                                                                                                                                                                                          │ newest-cni-277907            │ jenkins │ v1.37.0 │ 03 Oct 25 19:42 UTC │ 03 Oct 25 19:42 UTC │
	│ delete  │ -p newest-cni-277907                                                                                                                                                                                                                          │ newest-cni-277907            │ jenkins │ v1.37.0 │ 03 Oct 25 19:42 UTC │ 03 Oct 25 19:42 UTC │
	│ start   │ -p auto-388132 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-388132                  │ jenkins │ v1.37.0 │ 03 Oct 25 19:42 UTC │                     │
	│ image   │ default-k8s-diff-port-842797 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-842797 │ jenkins │ v1.37.0 │ 03 Oct 25 19:42 UTC │ 03 Oct 25 19:42 UTC │
	│ pause   │ -p default-k8s-diff-port-842797 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-842797 │ jenkins │ v1.37.0 │ 03 Oct 25 19:42 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/03 19:42:17
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 19:42:17.786389  499560 out.go:360] Setting OutFile to fd 1 ...
	I1003 19:42:17.786718  499560 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 19:42:17.786733  499560 out.go:374] Setting ErrFile to fd 2...
	I1003 19:42:17.786738  499560 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 19:42:17.787042  499560 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-284583/.minikube/bin
	I1003 19:42:17.787535  499560 out.go:368] Setting JSON to false
	I1003 19:42:17.789306  499560 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8689,"bootTime":1759511849,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1003 19:42:17.789379  499560 start.go:140] virtualization:  
	I1003 19:42:17.793178  499560 out.go:179] * [auto-388132] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1003 19:42:17.797436  499560 out.go:179]   - MINIKUBE_LOCATION=21625
	I1003 19:42:17.797558  499560 notify.go:220] Checking for updates...
	I1003 19:42:17.803658  499560 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 19:42:17.806749  499560 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21625-284583/kubeconfig
	I1003 19:42:17.809874  499560 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-284583/.minikube
	I1003 19:42:17.812884  499560 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1003 19:42:17.815791  499560 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 19:42:17.819344  499560 config.go:182] Loaded profile config "default-k8s-diff-port-842797": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 19:42:17.819459  499560 driver.go:421] Setting default libvirt URI to qemu:///system
	I1003 19:42:17.850509  499560 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1003 19:42:17.850692  499560 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 19:42:17.913172  499560 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-03 19:42:17.903903584 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1003 19:42:17.913284  499560 docker.go:318] overlay module found
	I1003 19:42:17.916621  499560 out.go:179] * Using the docker driver based on user configuration
	I1003 19:42:17.919537  499560 start.go:304] selected driver: docker
	I1003 19:42:17.919562  499560 start.go:924] validating driver "docker" against <nil>
	I1003 19:42:17.919576  499560 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 19:42:17.920305  499560 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 19:42:17.975057  499560 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-03 19:42:17.965099423 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1003 19:42:17.975214  499560 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1003 19:42:17.975443  499560 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 19:42:17.978318  499560 out.go:179] * Using Docker driver with root privileges
	I1003 19:42:17.981193  499560 cni.go:84] Creating CNI manager for ""
	I1003 19:42:17.981260  499560 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 19:42:17.981271  499560 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1003 19:42:17.981364  499560 start.go:348] cluster config:
	{Name:auto-388132 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-388132 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 19:42:17.984580  499560 out.go:179] * Starting "auto-388132" primary control-plane node in "auto-388132" cluster
	I1003 19:42:17.987538  499560 cache.go:123] Beginning downloading kic base image for docker with crio
	I1003 19:42:17.990537  499560 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1003 19:42:17.993439  499560 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 19:42:17.993497  499560 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21625-284583/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1003 19:42:17.993514  499560 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1003 19:42:17.993524  499560 cache.go:58] Caching tarball of preloaded images
	I1003 19:42:17.993606  499560 preload.go:233] Found /home/jenkins/minikube-integration/21625-284583/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1003 19:42:17.993618  499560 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1003 19:42:17.993753  499560 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/auto-388132/config.json ...
	I1003 19:42:17.993773  499560 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/auto-388132/config.json: {Name:mk52f18c750ffc1bdc804c16ea0e659fba654944 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:42:18.016603  499560 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1003 19:42:18.016658  499560 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1003 19:42:18.016679  499560 cache.go:232] Successfully downloaded all kic artifacts
	I1003 19:42:18.016707  499560 start.go:360] acquireMachinesLock for auto-388132: {Name:mk482e213d3b646dc96ebdd1779b41e5389cb65b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 19:42:18.016919  499560 start.go:364] duration metric: took 141.664µs to acquireMachinesLock for "auto-388132"
	I1003 19:42:18.016958  499560 start.go:93] Provisioning new machine with config: &{Name:auto-388132 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-388132 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1003 19:42:18.017039  499560 start.go:125] createHost starting for "" (driver="docker")
	I1003 19:42:18.020531  499560 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1003 19:42:18.020857  499560 start.go:159] libmachine.API.Create for "auto-388132" (driver="docker")
	I1003 19:42:18.020913  499560 client.go:168] LocalClient.Create starting
	I1003 19:42:18.020998  499560 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem
	I1003 19:42:18.021039  499560 main.go:141] libmachine: Decoding PEM data...
	I1003 19:42:18.021055  499560 main.go:141] libmachine: Parsing certificate...
	I1003 19:42:18.021127  499560 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem
	I1003 19:42:18.021154  499560 main.go:141] libmachine: Decoding PEM data...
	I1003 19:42:18.021166  499560 main.go:141] libmachine: Parsing certificate...
	I1003 19:42:18.021545  499560 cli_runner.go:164] Run: docker network inspect auto-388132 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1003 19:42:18.039694  499560 cli_runner.go:211] docker network inspect auto-388132 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1003 19:42:18.039813  499560 network_create.go:284] running [docker network inspect auto-388132] to gather additional debugging logs...
	I1003 19:42:18.039835  499560 cli_runner.go:164] Run: docker network inspect auto-388132
	W1003 19:42:18.059275  499560 cli_runner.go:211] docker network inspect auto-388132 returned with exit code 1
	I1003 19:42:18.059315  499560 network_create.go:287] error running [docker network inspect auto-388132]: docker network inspect auto-388132: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-388132 not found
	I1003 19:42:18.059330  499560 network_create.go:289] output of [docker network inspect auto-388132]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-388132 not found
	
	** /stderr **
	I1003 19:42:18.059422  499560 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 19:42:18.077571  499560 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-3a8a28910ba8 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6e:7a:d0:f8:54:63} reservation:<nil>}
	I1003 19:42:18.077954  499560 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-157403cbb468 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:8a:ee:cb:12:bf:d0} reservation:<nil>}
	I1003 19:42:18.078194  499560 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8d1e24f7a986 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:9e:1b:b1:d8:1a:13} reservation:<nil>}
	I1003 19:42:18.078507  499560 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-b6308a07ab66 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:16:65:a0:2b:76:e2} reservation:<nil>}
	I1003 19:42:18.078931  499560 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019d15a0}
	I1003 19:42:18.078977  499560 network_create.go:124] attempt to create docker network auto-388132 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1003 19:42:18.079046  499560 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-388132 auto-388132
	I1003 19:42:18.151940  499560 network_create.go:108] docker network auto-388132 192.168.85.0/24 created
	I1003 19:42:18.151976  499560 kic.go:121] calculated static IP "192.168.85.2" for the "auto-388132" container
	I1003 19:42:18.152056  499560 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1003 19:42:18.169001  499560 cli_runner.go:164] Run: docker volume create auto-388132 --label name.minikube.sigs.k8s.io=auto-388132 --label created_by.minikube.sigs.k8s.io=true
	I1003 19:42:18.188521  499560 oci.go:103] Successfully created a docker volume auto-388132
	I1003 19:42:18.188603  499560 cli_runner.go:164] Run: docker run --rm --name auto-388132-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-388132 --entrypoint /usr/bin/test -v auto-388132:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1003 19:42:18.695652  499560 oci.go:107] Successfully prepared a docker volume auto-388132
	I1003 19:42:18.695705  499560 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 19:42:18.695726  499560 kic.go:194] Starting extracting preloaded images to volume ...
	I1003 19:42:18.695796  499560 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21625-284583/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-388132:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	
	
	==> CRI-O <==
	Oct 03 19:42:07 default-k8s-diff-port-842797 crio[653]: time="2025-10-03T19:42:07.438582054Z" level=info msg="Started container" PID=1646 containerID=fcd9a4b62f08b140b979667d54c075893895688705f962511f443c0a62e2c87a description=kube-system/storage-provisioner/storage-provisioner id=8c283931-2add-42cd-bb20-f43852dab8b2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b2908953482167880b2083818a8908543f655fb150f84dd673e49afa7137a542
	Oct 03 19:42:13 default-k8s-diff-port-842797 crio[653]: time="2025-10-03T19:42:13.054966075Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=aad187d1-bcc8-45b8-b449-eca432fcd67a name=/runtime.v1.ImageService/ImageStatus
	Oct 03 19:42:13 default-k8s-diff-port-842797 crio[653]: time="2025-10-03T19:42:13.060666817Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=75c5bf42-3ac8-4e91-a2e5-80320eab6ce7 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 19:42:13 default-k8s-diff-port-842797 crio[653]: time="2025-10-03T19:42:13.064018664Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-t69v5/dashboard-metrics-scraper" id=c1caf0eb-68de-4cd4-aa8c-c2c3eac67f38 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:42:13 default-k8s-diff-port-842797 crio[653]: time="2025-10-03T19:42:13.064405985Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 19:42:13 default-k8s-diff-port-842797 crio[653]: time="2025-10-03T19:42:13.079042161Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 19:42:13 default-k8s-diff-port-842797 crio[653]: time="2025-10-03T19:42:13.079784318Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 19:42:13 default-k8s-diff-port-842797 crio[653]: time="2025-10-03T19:42:13.1015431Z" level=info msg="Created container 65412dde6368e824654930cf8979c4db2cbf6850df87b339d97d58e62d902100: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-t69v5/dashboard-metrics-scraper" id=c1caf0eb-68de-4cd4-aa8c-c2c3eac67f38 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:42:13 default-k8s-diff-port-842797 crio[653]: time="2025-10-03T19:42:13.102525671Z" level=info msg="Starting container: 65412dde6368e824654930cf8979c4db2cbf6850df87b339d97d58e62d902100" id=634253c7-b52b-44ed-bdd9-2e4de4e18641 name=/runtime.v1.RuntimeService/StartContainer
	Oct 03 19:42:13 default-k8s-diff-port-842797 crio[653]: time="2025-10-03T19:42:13.104304661Z" level=info msg="Started container" PID=1680 containerID=65412dde6368e824654930cf8979c4db2cbf6850df87b339d97d58e62d902100 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-t69v5/dashboard-metrics-scraper id=634253c7-b52b-44ed-bdd9-2e4de4e18641 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8ebb5ab1cd3b43dabf22a7875fc258bb43da490419dfe7e452a2e0b58810bb4a
	Oct 03 19:42:13 default-k8s-diff-port-842797 crio[653]: time="2025-10-03T19:42:13.412238854Z" level=info msg="Removing container: cb6ba62a79524c022af13411f2d607a2158a3f64d4fcbae344b14b0c3f296a83" id=972176b2-ade1-42bc-864c-96273b0c8f3f name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 03 19:42:13 default-k8s-diff-port-842797 crio[653]: time="2025-10-03T19:42:13.423913047Z" level=info msg="Error loading conmon cgroup of container cb6ba62a79524c022af13411f2d607a2158a3f64d4fcbae344b14b0c3f296a83: cgroup deleted" id=972176b2-ade1-42bc-864c-96273b0c8f3f name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 03 19:42:13 default-k8s-diff-port-842797 crio[653]: time="2025-10-03T19:42:13.431128036Z" level=info msg="Removed container cb6ba62a79524c022af13411f2d607a2158a3f64d4fcbae344b14b0c3f296a83: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-t69v5/dashboard-metrics-scraper" id=972176b2-ade1-42bc-864c-96273b0c8f3f name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 03 19:42:16 default-k8s-diff-port-842797 crio[653]: time="2025-10-03T19:42:16.121141119Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 03 19:42:16 default-k8s-diff-port-842797 crio[653]: time="2025-10-03T19:42:16.124912074Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 03 19:42:16 default-k8s-diff-port-842797 crio[653]: time="2025-10-03T19:42:16.124950065Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 03 19:42:16 default-k8s-diff-port-842797 crio[653]: time="2025-10-03T19:42:16.124981359Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 03 19:42:16 default-k8s-diff-port-842797 crio[653]: time="2025-10-03T19:42:16.128345096Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 03 19:42:16 default-k8s-diff-port-842797 crio[653]: time="2025-10-03T19:42:16.128379156Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 03 19:42:16 default-k8s-diff-port-842797 crio[653]: time="2025-10-03T19:42:16.128401926Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 03 19:42:16 default-k8s-diff-port-842797 crio[653]: time="2025-10-03T19:42:16.136255653Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 03 19:42:16 default-k8s-diff-port-842797 crio[653]: time="2025-10-03T19:42:16.136291223Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 03 19:42:16 default-k8s-diff-port-842797 crio[653]: time="2025-10-03T19:42:16.136313032Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 03 19:42:16 default-k8s-diff-port-842797 crio[653]: time="2025-10-03T19:42:16.141041517Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 03 19:42:16 default-k8s-diff-port-842797 crio[653]: time="2025-10-03T19:42:16.141083864Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	65412dde6368e       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           13 seconds ago       Exited              dashboard-metrics-scraper   2                   8ebb5ab1cd3b4       dashboard-metrics-scraper-6ffb444bf9-t69v5             kubernetes-dashboard
	fcd9a4b62f08b       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           19 seconds ago       Running             storage-provisioner         2                   b290895348216       storage-provisioner                                    kube-system
	ebeaa951c920c       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   39 seconds ago       Running             kubernetes-dashboard        0                   437b1214d3710       kubernetes-dashboard-855c9754f9-ll25f                  kubernetes-dashboard
	82b814a25f1f5       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           51 seconds ago       Running             coredns                     1                   046b501bedc89       coredns-66bc5c9577-l8knz                               kube-system
	36a66515edb26       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           51 seconds ago       Running             kindnet-cni                 1                   22637bc751c87       kindnet-96q8s                                          kube-system
	ec43e1559e952       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           51 seconds ago       Running             busybox                     1                   87a991bd17a1c       busybox                                                default
	9c16c456853d8       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           51 seconds ago       Running             kube-proxy                  1                   041cadd896bc1       kube-proxy-gvslj                                       kube-system
	5a18bc974715b       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           51 seconds ago       Exited              storage-provisioner         1                   b290895348216       storage-provisioner                                    kube-system
	02535cb769088       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   eb1f316826838       kube-apiserver-default-k8s-diff-port-842797            kube-system
	72a3c6c093ee7       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   7e5d1aa095aac       kube-scheduler-default-k8s-diff-port-842797            kube-system
	95f720e182dbb       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   2e71251514078       etcd-default-k8s-diff-port-842797                      kube-system
	a6485da9cdb1c       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   91c55381e919a       kube-controller-manager-default-k8s-diff-port-842797   kube-system
	
	
	==> coredns [82b814a25f1f5e3ed6844334a8df2fe3ccfa2c194455da2c0e360c30e6aaca7e] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35110 - 39026 "HINFO IN 3196626566350724291.1470925459604482367. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015689967s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-842797
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-842797
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a43873c79fc22f8b1ccd29d3dfa635d392b09335
	                    minikube.k8s.io/name=default-k8s-diff-port-842797
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_03T19_40_05_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 03 Oct 2025 19:40:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-842797
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 03 Oct 2025 19:42:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 03 Oct 2025 19:42:04 +0000   Fri, 03 Oct 2025 19:39:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 03 Oct 2025 19:42:04 +0000   Fri, 03 Oct 2025 19:39:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 03 Oct 2025 19:42:04 +0000   Fri, 03 Oct 2025 19:39:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 03 Oct 2025 19:42:04 +0000   Fri, 03 Oct 2025 19:40:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-842797
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 6976841640c04eeba284ef295c75e540
	  System UUID:                0315913a-ac76-434b-8962-2420e3ad1d8e
	  Boot ID:                    3762136e-8bec-4104-a5cb-0b1976f6048e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-66bc5c9577-l8knz                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m17s
	  kube-system                 etcd-default-k8s-diff-port-842797                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m22s
	  kube-system                 kindnet-96q8s                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m17s
	  kube-system                 kube-apiserver-default-k8s-diff-port-842797             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-842797    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 kube-proxy-gvslj                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m17s
	  kube-system                 kube-scheduler-default-k8s-diff-port-842797             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m15s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-t69v5              0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-ll25f                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m15s                  kube-proxy       
	  Normal   Starting                 48s                    kube-proxy       
	  Warning  CgroupV1                 2m34s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m34s (x8 over 2m34s)  kubelet          Node default-k8s-diff-port-842797 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m34s (x8 over 2m34s)  kubelet          Node default-k8s-diff-port-842797 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m34s (x8 over 2m34s)  kubelet          Node default-k8s-diff-port-842797 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m23s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m23s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m22s                  kubelet          Node default-k8s-diff-port-842797 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m22s                  kubelet          Node default-k8s-diff-port-842797 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m22s                  kubelet          Node default-k8s-diff-port-842797 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m18s                  node-controller  Node default-k8s-diff-port-842797 event: Registered Node default-k8s-diff-port-842797 in Controller
	  Normal   NodeReady                96s                    kubelet          Node default-k8s-diff-port-842797 status is now: NodeReady
	  Normal   Starting                 62s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 62s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  61s (x8 over 61s)      kubelet          Node default-k8s-diff-port-842797 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    61s (x8 over 61s)      kubelet          Node default-k8s-diff-port-842797 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     61s (x8 over 61s)      kubelet          Node default-k8s-diff-port-842797 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           48s                    node-controller  Node default-k8s-diff-port-842797 event: Registered Node default-k8s-diff-port-842797 in Controller
	
	
	==> dmesg <==
	[ +24.839009] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:13] overlayfs: idmapped layers are currently not supported
	[ +26.493253] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:15] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:16] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:17] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000010] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Oct 3 19:18] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:20] overlayfs: idmapped layers are currently not supported
	[ +32.018892] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:22] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:24] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:26] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:32] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:34] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:35] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:36] overlayfs: idmapped layers are currently not supported
	[  +4.740983] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:38] overlayfs: idmapped layers are currently not supported
	[ +12.897300] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:39] overlayfs: idmapped layers are currently not supported
	[  +4.104516] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:41] overlayfs: idmapped layers are currently not supported
	[  +1.990678] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:42] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [95f720e182dbb5dbc9ca0b55d30ef0869679c1087e3e87174822cffb7d42a5ea] <==
	{"level":"warn","ts":"2025-10-03T19:41:31.128494Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:41:31.177188Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:41:31.207636Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:41:31.230433Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:41:31.265824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:41:31.289977Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:41:31.332610Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:41:31.347487Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:41:31.377932Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:41:31.403330Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:41:31.493006Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:41:31.494483Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:41:31.521981Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:41:31.546959Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:41:31.584011Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:41:31.612023Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:41:31.647428Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:41:31.686740Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:41:31.720303Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:41:31.800754Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:41:31.878396Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:41:31.909423Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:41:31.933025Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:41:31.969600Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:41:32.072882Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40706","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 19:42:27 up  2:24,  0 user,  load average: 5.07, 3.88, 2.72
	Linux default-k8s-diff-port-842797 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [36a66515edb26dfbfcd4d2a7fd0c17ac0037c754a0f101544d32fe0f3d820b72] <==
	I1003 19:41:35.876299       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1003 19:41:35.876691       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1003 19:41:35.876850       1 main.go:148] setting mtu 1500 for CNI 
	I1003 19:41:35.876863       1 main.go:178] kindnetd IP family: "ipv4"
	I1003 19:41:35.876875       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-03T19:41:36Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1003 19:41:36.121230       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1003 19:41:36.121255       1 controller.go:381] "Waiting for informer caches to sync"
	I1003 19:41:36.121264       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1003 19:41:36.136982       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1003 19:42:06.123157       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1003 19:42:06.123333       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1003 19:42:06.123440       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1003 19:42:06.137802       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1003 19:42:07.722250       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1003 19:42:07.722287       1 metrics.go:72] Registering metrics
	I1003 19:42:07.722354       1 controller.go:711] "Syncing nftables rules"
	I1003 19:42:16.120820       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1003 19:42:16.120875       1 main.go:301] handling current node
	I1003 19:42:26.120631       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1003 19:42:26.120667       1 main.go:301] handling current node
	
	
	==> kube-apiserver [02535cb7690885e90adcc200c551315486edf2d6f1bb2cbd015e185c373fe0c2] <==
	I1003 19:41:34.174415       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1003 19:41:34.174423       1 cache.go:39] Caches are synced for autoregister controller
	I1003 19:41:34.207834       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1003 19:41:34.229023       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1003 19:41:34.229134       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1003 19:41:34.229179       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1003 19:41:34.229232       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1003 19:41:34.229239       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1003 19:41:34.229321       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1003 19:41:34.229351       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1003 19:41:34.280884       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1003 19:41:34.280913       1 policy_source.go:240] refreshing policies
	I1003 19:41:34.304961       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1003 19:41:34.334894       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1003 19:41:34.409379       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1003 19:41:34.466172       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1003 19:41:36.744242       1 controller.go:667] quota admission added evaluator for: namespaces
	I1003 19:41:36.928046       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1003 19:41:37.124773       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1003 19:41:37.188829       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1003 19:41:37.518400       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.130.95"}
	I1003 19:41:37.534673       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.41.173"}
	I1003 19:41:39.625024       1 controller.go:667] quota admission added evaluator for: endpoints
	I1003 19:41:39.774074       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1003 19:41:39.825525       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [a6485da9cdb1c66096d6663ef94b1c675b5cc8904328eba3b2537fa5c260cdba] <==
	I1003 19:41:39.204890       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1003 19:41:39.204961       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-842797"
	I1003 19:41:39.205016       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1003 19:41:39.206588       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1003 19:41:39.208918       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1003 19:41:39.209059       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1003 19:41:39.216636       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1003 19:41:39.218315       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1003 19:41:39.218391       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1003 19:41:39.218601       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1003 19:41:39.218912       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1003 19:41:39.219129       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1003 19:41:39.219153       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1003 19:41:39.241216       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1003 19:41:39.243573       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1003 19:41:39.247862       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1003 19:41:39.255281       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1003 19:41:39.257743       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1003 19:41:39.263468       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1003 19:41:39.269871       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1003 19:41:39.269921       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1003 19:41:39.274322       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1003 19:41:39.274356       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1003 19:41:39.274364       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1003 19:41:39.289295       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [9c16c456853d85ab9638243feba1261a05c0a8c713822477310b074fb4eb4723] <==
	I1003 19:41:37.435345       1 server_linux.go:53] "Using iptables proxy"
	I1003 19:41:37.713867       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1003 19:41:37.838243       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1003 19:41:37.838360       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1003 19:41:37.838479       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1003 19:41:38.070953       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1003 19:41:38.071070       1 server_linux.go:132] "Using iptables Proxier"
	I1003 19:41:38.219743       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1003 19:41:38.220135       1 server.go:527] "Version info" version="v1.34.1"
	I1003 19:41:38.220199       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1003 19:41:38.222665       1 config.go:200] "Starting service config controller"
	I1003 19:41:38.222759       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1003 19:41:38.222805       1 config.go:106] "Starting endpoint slice config controller"
	I1003 19:41:38.222833       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1003 19:41:38.222876       1 config.go:403] "Starting serviceCIDR config controller"
	I1003 19:41:38.222903       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1003 19:41:38.227603       1 config.go:309] "Starting node config controller"
	I1003 19:41:38.232186       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1003 19:41:38.232241       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1003 19:41:38.323006       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1003 19:41:38.323018       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1003 19:41:38.327049       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [72a3c6c093ee7526caa8d968d0ef1b63f258556b89c398a06f6b15295b410635] <==
	I1003 19:41:31.172582       1 serving.go:386] Generated self-signed cert in-memory
	I1003 19:41:35.988704       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1003 19:41:36.007286       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1003 19:41:36.092424       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1003 19:41:36.092529       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1003 19:41:36.092551       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1003 19:41:36.092583       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1003 19:41:36.107743       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1003 19:41:36.107768       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1003 19:41:36.107787       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1003 19:41:36.107793       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1003 19:41:36.312453       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1003 19:41:36.312517       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1003 19:41:36.336348       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Oct 03 19:41:40 default-k8s-diff-port-842797 kubelet[782]: I1003 19:41:40.078497     782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/5b3fda27-6d63-4fd1-8e59-407c16cc358b-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-ll25f\" (UID: \"5b3fda27-6d63-4fd1-8e59-407c16cc358b\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-ll25f"
	Oct 03 19:41:40 default-k8s-diff-port-842797 kubelet[782]: I1003 19:41:40.078607     782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jvm2\" (UniqueName: \"kubernetes.io/projected/a9372a39-bc89-4ed9-8bd1-c11c31755813-kube-api-access-2jvm2\") pod \"dashboard-metrics-scraper-6ffb444bf9-t69v5\" (UID: \"a9372a39-bc89-4ed9-8bd1-c11c31755813\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-t69v5"
	Oct 03 19:41:40 default-k8s-diff-port-842797 kubelet[782]: I1003 19:41:40.078772     782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2j9p\" (UniqueName: \"kubernetes.io/projected/5b3fda27-6d63-4fd1-8e59-407c16cc358b-kube-api-access-x2j9p\") pod \"kubernetes-dashboard-855c9754f9-ll25f\" (UID: \"5b3fda27-6d63-4fd1-8e59-407c16cc358b\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-ll25f"
	Oct 03 19:41:40 default-k8s-diff-port-842797 kubelet[782]: I1003 19:41:40.109121     782 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 03 19:41:40 default-k8s-diff-port-842797 kubelet[782]: W1003 19:41:40.291413     782 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/dd1cbce823c3c68d280f6d6431457674ab5e928f19effd4b41908fc33cc07deb/crio-437b1214d371075f22c6ded412c43182d71a054a09a13a567185d722f8876c7b WatchSource:0}: Error finding container 437b1214d371075f22c6ded412c43182d71a054a09a13a567185d722f8876c7b: Status 404 returned error can't find the container with id 437b1214d371075f22c6ded412c43182d71a054a09a13a567185d722f8876c7b
	Oct 03 19:41:40 default-k8s-diff-port-842797 kubelet[782]: W1003 19:41:40.292199     782 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/dd1cbce823c3c68d280f6d6431457674ab5e928f19effd4b41908fc33cc07deb/crio-8ebb5ab1cd3b43dabf22a7875fc258bb43da490419dfe7e452a2e0b58810bb4a WatchSource:0}: Error finding container 8ebb5ab1cd3b43dabf22a7875fc258bb43da490419dfe7e452a2e0b58810bb4a: Status 404 returned error can't find the container with id 8ebb5ab1cd3b43dabf22a7875fc258bb43da490419dfe7e452a2e0b58810bb4a
	Oct 03 19:41:54 default-k8s-diff-port-842797 kubelet[782]: I1003 19:41:54.336478     782 scope.go:117] "RemoveContainer" containerID="eb5107aeafecd8279566dfc81100d4a288c2d56be8ce3bffcc2e790d76c13a76"
	Oct 03 19:41:54 default-k8s-diff-port-842797 kubelet[782]: I1003 19:41:54.374590     782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-ll25f" podStartSLOduration=8.542529471 podStartE2EDuration="15.374569239s" podCreationTimestamp="2025-10-03 19:41:39 +0000 UTC" firstStartedPulling="2025-10-03 19:41:40.294688102 +0000 UTC m=+14.587987913" lastFinishedPulling="2025-10-03 19:41:47.12672787 +0000 UTC m=+21.420027681" observedRunningTime="2025-10-03 19:41:47.351146298 +0000 UTC m=+21.644446108" watchObservedRunningTime="2025-10-03 19:41:54.374569239 +0000 UTC m=+28.667869058"
	Oct 03 19:41:55 default-k8s-diff-port-842797 kubelet[782]: I1003 19:41:55.340958     782 scope.go:117] "RemoveContainer" containerID="eb5107aeafecd8279566dfc81100d4a288c2d56be8ce3bffcc2e790d76c13a76"
	Oct 03 19:41:55 default-k8s-diff-port-842797 kubelet[782]: I1003 19:41:55.341773     782 scope.go:117] "RemoveContainer" containerID="cb6ba62a79524c022af13411f2d607a2158a3f64d4fcbae344b14b0c3f296a83"
	Oct 03 19:41:55 default-k8s-diff-port-842797 kubelet[782]: E1003 19:41:55.342062     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-t69v5_kubernetes-dashboard(a9372a39-bc89-4ed9-8bd1-c11c31755813)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-t69v5" podUID="a9372a39-bc89-4ed9-8bd1-c11c31755813"
	Oct 03 19:41:56 default-k8s-diff-port-842797 kubelet[782]: I1003 19:41:56.344798     782 scope.go:117] "RemoveContainer" containerID="cb6ba62a79524c022af13411f2d607a2158a3f64d4fcbae344b14b0c3f296a83"
	Oct 03 19:41:56 default-k8s-diff-port-842797 kubelet[782]: E1003 19:41:56.344955     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-t69v5_kubernetes-dashboard(a9372a39-bc89-4ed9-8bd1-c11c31755813)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-t69v5" podUID="a9372a39-bc89-4ed9-8bd1-c11c31755813"
	Oct 03 19:42:00 default-k8s-diff-port-842797 kubelet[782]: I1003 19:42:00.208490     782 scope.go:117] "RemoveContainer" containerID="cb6ba62a79524c022af13411f2d607a2158a3f64d4fcbae344b14b0c3f296a83"
	Oct 03 19:42:00 default-k8s-diff-port-842797 kubelet[782]: E1003 19:42:00.211381     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-t69v5_kubernetes-dashboard(a9372a39-bc89-4ed9-8bd1-c11c31755813)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-t69v5" podUID="a9372a39-bc89-4ed9-8bd1-c11c31755813"
	Oct 03 19:42:07 default-k8s-diff-port-842797 kubelet[782]: I1003 19:42:07.374470     782 scope.go:117] "RemoveContainer" containerID="5a18bc974715b940aec68811c77d1e74f00fd5e65c2098ece1b868b46c87fb02"
	Oct 03 19:42:13 default-k8s-diff-port-842797 kubelet[782]: I1003 19:42:13.053998     782 scope.go:117] "RemoveContainer" containerID="cb6ba62a79524c022af13411f2d607a2158a3f64d4fcbae344b14b0c3f296a83"
	Oct 03 19:42:13 default-k8s-diff-port-842797 kubelet[782]: I1003 19:42:13.400796     782 scope.go:117] "RemoveContainer" containerID="cb6ba62a79524c022af13411f2d607a2158a3f64d4fcbae344b14b0c3f296a83"
	Oct 03 19:42:13 default-k8s-diff-port-842797 kubelet[782]: I1003 19:42:13.401115     782 scope.go:117] "RemoveContainer" containerID="65412dde6368e824654930cf8979c4db2cbf6850df87b339d97d58e62d902100"
	Oct 03 19:42:13 default-k8s-diff-port-842797 kubelet[782]: E1003 19:42:13.401270     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-t69v5_kubernetes-dashboard(a9372a39-bc89-4ed9-8bd1-c11c31755813)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-t69v5" podUID="a9372a39-bc89-4ed9-8bd1-c11c31755813"
	Oct 03 19:42:20 default-k8s-diff-port-842797 kubelet[782]: I1003 19:42:20.207241     782 scope.go:117] "RemoveContainer" containerID="65412dde6368e824654930cf8979c4db2cbf6850df87b339d97d58e62d902100"
	Oct 03 19:42:20 default-k8s-diff-port-842797 kubelet[782]: E1003 19:42:20.207429     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-t69v5_kubernetes-dashboard(a9372a39-bc89-4ed9-8bd1-c11c31755813)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-t69v5" podUID="a9372a39-bc89-4ed9-8bd1-c11c31755813"
	Oct 03 19:42:24 default-k8s-diff-port-842797 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 03 19:42:24 default-k8s-diff-port-842797 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 03 19:42:24 default-k8s-diff-port-842797 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [ebeaa951c920c3a9a1c23debb610071301437a5219a14cd30b8336ab848dfff9] <==
	2025/10/03 19:41:47 Using namespace: kubernetes-dashboard
	2025/10/03 19:41:47 Using in-cluster config to connect to apiserver
	2025/10/03 19:41:47 Using secret token for csrf signing
	2025/10/03 19:41:47 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/03 19:41:47 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/03 19:41:47 Successful initial request to the apiserver, version: v1.34.1
	2025/10/03 19:41:47 Generating JWE encryption key
	2025/10/03 19:41:47 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/03 19:41:47 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/03 19:41:48 Initializing JWE encryption key from synchronized object
	2025/10/03 19:41:48 Creating in-cluster Sidecar client
	2025/10/03 19:41:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/03 19:41:48 Serving insecurely on HTTP port: 9090
	2025/10/03 19:42:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/03 19:41:47 Starting overwatch
	
	
	==> storage-provisioner [5a18bc974715b940aec68811c77d1e74f00fd5e65c2098ece1b868b46c87fb02] <==
	I1003 19:41:36.142438       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1003 19:42:06.366195       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [fcd9a4b62f08b140b979667d54c075893895688705f962511f443c0a62e2c87a] <==
	I1003 19:42:07.471885       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1003 19:42:07.498409       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1003 19:42:07.498555       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1003 19:42:07.504945       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:42:10.959756       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:42:15.220193       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:42:18.818606       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:42:21.872455       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:42:24.894647       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:42:24.900217       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1003 19:42:24.900363       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1003 19:42:24.900519       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-842797_75d7bf1b-5163-46dc-b538-e984e14535b7!
	I1003 19:42:24.900816       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cf2e8791-f5d6-4403-8f58-225b6bccc9d1", APIVersion:"v1", ResourceVersion:"680", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-842797_75d7bf1b-5163-46dc-b538-e984e14535b7 became leader
	W1003 19:42:24.914286       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:42:24.925416       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1003 19:42:25.001030       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-842797_75d7bf1b-5163-46dc-b538-e984e14535b7!
	W1003 19:42:26.930175       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:42:26.936074       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-842797 -n default-k8s-diff-port-842797
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-842797 -n default-k8s-diff-port-842797: exit status 2 (445.176635ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-842797 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-842797
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-842797:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "dd1cbce823c3c68d280f6d6431457674ab5e928f19effd4b41908fc33cc07deb",
	        "Created": "2025-10-03T19:39:31.38545341Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 493057,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-03T19:41:18.62100953Z",
	            "FinishedAt": "2025-10-03T19:41:17.637038136Z"
	        },
	        "Image": "sha256:5f534d1f6dbdc6822bb3d07eb55e2a83d08e94cbdcc855a877b4f3dd1ac1278e",
	        "ResolvConfPath": "/var/lib/docker/containers/dd1cbce823c3c68d280f6d6431457674ab5e928f19effd4b41908fc33cc07deb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dd1cbce823c3c68d280f6d6431457674ab5e928f19effd4b41908fc33cc07deb/hostname",
	        "HostsPath": "/var/lib/docker/containers/dd1cbce823c3c68d280f6d6431457674ab5e928f19effd4b41908fc33cc07deb/hosts",
	        "LogPath": "/var/lib/docker/containers/dd1cbce823c3c68d280f6d6431457674ab5e928f19effd4b41908fc33cc07deb/dd1cbce823c3c68d280f6d6431457674ab5e928f19effd4b41908fc33cc07deb-json.log",
	        "Name": "/default-k8s-diff-port-842797",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-842797:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-842797",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dd1cbce823c3c68d280f6d6431457674ab5e928f19effd4b41908fc33cc07deb",
	                "LowerDir": "/var/lib/docker/overlay2/bbf4d12c39f5d56f33173d11971fd8a2d5507eec84c402825790261c2e06dc86-init/diff:/var/lib/docker/overlay2/87b205803817b0b71a214d995ab7e10a92033bbf72d76d6e052f1d21ccecb313/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bbf4d12c39f5d56f33173d11971fd8a2d5507eec84c402825790261c2e06dc86/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bbf4d12c39f5d56f33173d11971fd8a2d5507eec84c402825790261c2e06dc86/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bbf4d12c39f5d56f33173d11971fd8a2d5507eec84c402825790261c2e06dc86/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-842797",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-842797/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-842797",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-842797",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-842797",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f8d774cbac349670b074773b2cb33dbd7126ee574e68713c0acc9f070ee9aa75",
	            "SandboxKey": "/var/run/docker/netns/f8d774cbac34",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33458"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33459"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33462"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33460"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33461"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-842797": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "be:aa:e4:04:72:84",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b6308a07ab6648978544dad609ae3a504a2e2942784508f1578ab5933d54e3b9",
	                    "EndpointID": "7a28daf2d0e8c76456f41e2238f4ce8a66988716f1b1511e71db04eeb0dffeb0",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-842797",
	                        "dd1cbce823c3"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-842797 -n default-k8s-diff-port-842797
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-842797 -n default-k8s-diff-port-842797: exit status 2 (419.4097ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-842797 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-842797 logs -n 25: (1.569911153s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p default-k8s-diff-port-842797 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-842797 │ jenkins │ v1.37.0 │ 03 Oct 25 19:39 UTC │ 03 Oct 25 19:40 UTC │
	│ addons  │ enable metrics-server -p embed-certs-327416 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-327416           │ jenkins │ v1.37.0 │ 03 Oct 25 19:39 UTC │                     │
	│ stop    │ -p embed-certs-327416 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-327416           │ jenkins │ v1.37.0 │ 03 Oct 25 19:39 UTC │ 03 Oct 25 19:39 UTC │
	│ addons  │ enable dashboard -p embed-certs-327416 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-327416           │ jenkins │ v1.37.0 │ 03 Oct 25 19:39 UTC │ 03 Oct 25 19:39 UTC │
	│ start   │ -p embed-certs-327416 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-327416           │ jenkins │ v1.37.0 │ 03 Oct 25 19:39 UTC │ 03 Oct 25 19:40 UTC │
	│ image   │ embed-certs-327416 image list --format=json                                                                                                                                                                                                   │ embed-certs-327416           │ jenkins │ v1.37.0 │ 03 Oct 25 19:40 UTC │ 03 Oct 25 19:40 UTC │
	│ pause   │ -p embed-certs-327416 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-327416           │ jenkins │ v1.37.0 │ 03 Oct 25 19:40 UTC │                     │
	│ delete  │ -p embed-certs-327416                                                                                                                                                                                                                         │ embed-certs-327416           │ jenkins │ v1.37.0 │ 03 Oct 25 19:40 UTC │ 03 Oct 25 19:41 UTC │
	│ delete  │ -p embed-certs-327416                                                                                                                                                                                                                         │ embed-certs-327416           │ jenkins │ v1.37.0 │ 03 Oct 25 19:41 UTC │ 03 Oct 25 19:41 UTC │
	│ start   │ -p newest-cni-277907 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-277907            │ jenkins │ v1.37.0 │ 03 Oct 25 19:41 UTC │ 03 Oct 25 19:41 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-842797 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-842797 │ jenkins │ v1.37.0 │ 03 Oct 25 19:41 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-842797 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-842797 │ jenkins │ v1.37.0 │ 03 Oct 25 19:41 UTC │ 03 Oct 25 19:41 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-842797 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-842797 │ jenkins │ v1.37.0 │ 03 Oct 25 19:41 UTC │ 03 Oct 25 19:41 UTC │
	│ start   │ -p default-k8s-diff-port-842797 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-842797 │ jenkins │ v1.37.0 │ 03 Oct 25 19:41 UTC │ 03 Oct 25 19:42 UTC │
	│ addons  │ enable metrics-server -p newest-cni-277907 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-277907            │ jenkins │ v1.37.0 │ 03 Oct 25 19:41 UTC │                     │
	│ stop    │ -p newest-cni-277907 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-277907            │ jenkins │ v1.37.0 │ 03 Oct 25 19:41 UTC │ 03 Oct 25 19:41 UTC │
	│ addons  │ enable dashboard -p newest-cni-277907 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-277907            │ jenkins │ v1.37.0 │ 03 Oct 25 19:41 UTC │ 03 Oct 25 19:41 UTC │
	│ start   │ -p newest-cni-277907 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-277907            │ jenkins │ v1.37.0 │ 03 Oct 25 19:41 UTC │ 03 Oct 25 19:42 UTC │
	│ image   │ newest-cni-277907 image list --format=json                                                                                                                                                                                                    │ newest-cni-277907            │ jenkins │ v1.37.0 │ 03 Oct 25 19:42 UTC │ 03 Oct 25 19:42 UTC │
	│ pause   │ -p newest-cni-277907 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-277907            │ jenkins │ v1.37.0 │ 03 Oct 25 19:42 UTC │                     │
	│ delete  │ -p newest-cni-277907                                                                                                                                                                                                                          │ newest-cni-277907            │ jenkins │ v1.37.0 │ 03 Oct 25 19:42 UTC │ 03 Oct 25 19:42 UTC │
	│ delete  │ -p newest-cni-277907                                                                                                                                                                                                                          │ newest-cni-277907            │ jenkins │ v1.37.0 │ 03 Oct 25 19:42 UTC │ 03 Oct 25 19:42 UTC │
	│ start   │ -p auto-388132 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-388132                  │ jenkins │ v1.37.0 │ 03 Oct 25 19:42 UTC │                     │
	│ image   │ default-k8s-diff-port-842797 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-842797 │ jenkins │ v1.37.0 │ 03 Oct 25 19:42 UTC │ 03 Oct 25 19:42 UTC │
	│ pause   │ -p default-k8s-diff-port-842797 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-842797 │ jenkins │ v1.37.0 │ 03 Oct 25 19:42 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/03 19:42:17
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 19:42:17.786389  499560 out.go:360] Setting OutFile to fd 1 ...
	I1003 19:42:17.786718  499560 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 19:42:17.786733  499560 out.go:374] Setting ErrFile to fd 2...
	I1003 19:42:17.786738  499560 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 19:42:17.787042  499560 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-284583/.minikube/bin
	I1003 19:42:17.787535  499560 out.go:368] Setting JSON to false
	I1003 19:42:17.789306  499560 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8689,"bootTime":1759511849,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1003 19:42:17.789379  499560 start.go:140] virtualization:  
	I1003 19:42:17.793178  499560 out.go:179] * [auto-388132] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1003 19:42:17.797436  499560 out.go:179]   - MINIKUBE_LOCATION=21625
	I1003 19:42:17.797558  499560 notify.go:220] Checking for updates...
	I1003 19:42:17.803658  499560 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 19:42:17.806749  499560 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21625-284583/kubeconfig
	I1003 19:42:17.809874  499560 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-284583/.minikube
	I1003 19:42:17.812884  499560 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1003 19:42:17.815791  499560 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 19:42:17.819344  499560 config.go:182] Loaded profile config "default-k8s-diff-port-842797": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 19:42:17.819459  499560 driver.go:421] Setting default libvirt URI to qemu:///system
	I1003 19:42:17.850509  499560 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1003 19:42:17.850692  499560 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 19:42:17.913172  499560 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-03 19:42:17.903903584 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1003 19:42:17.913284  499560 docker.go:318] overlay module found
	I1003 19:42:17.916621  499560 out.go:179] * Using the docker driver based on user configuration
	I1003 19:42:17.919537  499560 start.go:304] selected driver: docker
	I1003 19:42:17.919562  499560 start.go:924] validating driver "docker" against <nil>
	I1003 19:42:17.919576  499560 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 19:42:17.920305  499560 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 19:42:17.975057  499560 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-03 19:42:17.965099423 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1003 19:42:17.975214  499560 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1003 19:42:17.975443  499560 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1003 19:42:17.978318  499560 out.go:179] * Using Docker driver with root privileges
	I1003 19:42:17.981193  499560 cni.go:84] Creating CNI manager for ""
	I1003 19:42:17.981260  499560 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 19:42:17.981271  499560 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1003 19:42:17.981364  499560 start.go:348] cluster config:
	{Name:auto-388132 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-388132 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cri
o CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: Au
toPauseInterval:1m0s}
	I1003 19:42:17.984580  499560 out.go:179] * Starting "auto-388132" primary control-plane node in "auto-388132" cluster
	I1003 19:42:17.987538  499560 cache.go:123] Beginning downloading kic base image for docker with crio
	I1003 19:42:17.990537  499560 out.go:179] * Pulling base image v0.0.48-1759382731-21643 ...
	I1003 19:42:17.993439  499560 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 19:42:17.993497  499560 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21625-284583/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1003 19:42:17.993514  499560 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1003 19:42:17.993524  499560 cache.go:58] Caching tarball of preloaded images
	I1003 19:42:17.993606  499560 preload.go:233] Found /home/jenkins/minikube-integration/21625-284583/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1003 19:42:17.993618  499560 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1003 19:42:17.993753  499560 profile.go:143] Saving config to /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/auto-388132/config.json ...
	I1003 19:42:17.993773  499560 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/auto-388132/config.json: {Name:mk52f18c750ffc1bdc804c16ea0e659fba654944 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1003 19:42:18.016603  499560 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon, skipping pull
	I1003 19:42:18.016658  499560 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in daemon, skipping load
	I1003 19:42:18.016679  499560 cache.go:232] Successfully downloaded all kic artifacts
	I1003 19:42:18.016707  499560 start.go:360] acquireMachinesLock for auto-388132: {Name:mk482e213d3b646dc96ebdd1779b41e5389cb65b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1003 19:42:18.016919  499560 start.go:364] duration metric: took 141.664µs to acquireMachinesLock for "auto-388132"
	I1003 19:42:18.016958  499560 start.go:93] Provisioning new machine with config: &{Name:auto-388132 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-388132 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: Soc
ketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1003 19:42:18.017039  499560 start.go:125] createHost starting for "" (driver="docker")
	I1003 19:42:18.020531  499560 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1003 19:42:18.020857  499560 start.go:159] libmachine.API.Create for "auto-388132" (driver="docker")
	I1003 19:42:18.020913  499560 client.go:168] LocalClient.Create starting
	I1003 19:42:18.020998  499560 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem
	I1003 19:42:18.021039  499560 main.go:141] libmachine: Decoding PEM data...
	I1003 19:42:18.021055  499560 main.go:141] libmachine: Parsing certificate...
	I1003 19:42:18.021127  499560 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem
	I1003 19:42:18.021154  499560 main.go:141] libmachine: Decoding PEM data...
	I1003 19:42:18.021166  499560 main.go:141] libmachine: Parsing certificate...
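
The "Reading certificate data ... Decoding PEM data ... Parsing certificate" steps above load the CA and client certificates that libmachine later installs on the node. A minimal sketch of that decode/parse step using only the Go standard library (the file path is copied from the log; everything else is illustrative):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	// Path taken from the log above; adjust for your own .minikube directory.
	data, err := os.ReadFile("/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data) // "Decoding PEM data..."
	if block == nil || block.Type != "CERTIFICATE" {
		log.Fatal("no CERTIFICATE block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes) // "Parsing certificate..."
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("subject:", cert.Subject, "expires:", cert.NotAfter)
}
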
	I1003 19:42:18.021545  499560 cli_runner.go:164] Run: docker network inspect auto-388132 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1003 19:42:18.039694  499560 cli_runner.go:211] docker network inspect auto-388132 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1003 19:42:18.039813  499560 network_create.go:284] running [docker network inspect auto-388132] to gather additional debugging logs...
	I1003 19:42:18.039835  499560 cli_runner.go:164] Run: docker network inspect auto-388132
	W1003 19:42:18.059275  499560 cli_runner.go:211] docker network inspect auto-388132 returned with exit code 1
	I1003 19:42:18.059315  499560 network_create.go:287] error running [docker network inspect auto-388132]: docker network inspect auto-388132: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-388132 not found
	I1003 19:42:18.059330  499560 network_create.go:289] output of [docker network inspect auto-388132]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-388132 not found
	
	** /stderr **
	I1003 19:42:18.059422  499560 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1003 19:42:18.077571  499560 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-3a8a28910ba8 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6e:7a:d0:f8:54:63} reservation:<nil>}
	I1003 19:42:18.077954  499560 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-157403cbb468 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:8a:ee:cb:12:bf:d0} reservation:<nil>}
	I1003 19:42:18.078194  499560 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-8d1e24f7a986 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:9e:1b:b1:d8:1a:13} reservation:<nil>}
	I1003 19:42:18.078507  499560 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-b6308a07ab66 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:16:65:a0:2b:76:e2} reservation:<nil>}
	I1003 19:42:18.078931  499560 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019d15a0}
	I1003 19:42:18.078977  499560 network_create.go:124] attempt to create docker network auto-388132 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1003 19:42:18.079046  499560 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-388132 auto-388132
	I1003 19:42:18.151940  499560 network_create.go:108] docker network auto-388132 192.168.85.0/24 created
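
The four "skipping subnet ... that is taken" lines followed by "using free private subnet 192.168.85.0/24" show minikube walking candidate /24 networks until it finds one with no existing bridge. A rough sketch of that selection loop, assuming the taken subnets are already known (the candidate list and step size are illustrative, not minikube's exact algorithm):

package main

import (
	"fmt"
	"net"
)

// firstFreeSubnet returns the first candidate /24 that does not collide
// with any subnet already in use by an existing Docker network.
func firstFreeSubnet(taken []*net.IPNet) *net.IPNet {
	// Candidates mirror the ones seen in the log: 192.168.49.0/24, .58, .67, ...
	for third := 49; third <= 247; third += 9 {
		_, candidate, _ := net.ParseCIDR(fmt.Sprintf("192.168.%d.0/24", third))
		collides := false
		for _, t := range taken {
			if t.Contains(candidate.IP) || candidate.Contains(t.IP) {
				collides = true
				break
			}
		}
		if !collides {
			return candidate
		}
	}
	return nil
}

func main() {
	var taken []*net.IPNet
	for _, cidr := range []string{"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24", "192.168.76.0/24"} {
		_, n, _ := net.ParseCIDR(cidr)
		taken = append(taken, n)
	}
	fmt.Println(firstFreeSubnet(taken)) // 192.168.85.0/24, matching the log
}
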
	I1003 19:42:18.151976  499560 kic.go:121] calculated static IP "192.168.85.2" for the "auto-388132" container
	I1003 19:42:18.152056  499560 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1003 19:42:18.169001  499560 cli_runner.go:164] Run: docker volume create auto-388132 --label name.minikube.sigs.k8s.io=auto-388132 --label created_by.minikube.sigs.k8s.io=true
	I1003 19:42:18.188521  499560 oci.go:103] Successfully created a docker volume auto-388132
	I1003 19:42:18.188603  499560 cli_runner.go:164] Run: docker run --rm --name auto-388132-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-388132 --entrypoint /usr/bin/test -v auto-388132:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -d /var/lib
	I1003 19:42:18.695652  499560 oci.go:107] Successfully prepared a docker volume auto-388132
	I1003 19:42:18.695705  499560 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 19:42:18.695726  499560 kic.go:194] Starting extracting preloaded images to volume ...
	I1003 19:42:18.695796  499560 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21625-284583/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-388132:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir
	I1003 19:42:23.051044  499560 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21625-284583/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-388132:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d -I lz4 -xf /preloaded.tar -C /extractDir: (4.355196923s)
	I1003 19:42:23.051086  499560 kic.go:203] duration metric: took 4.355356761s to extract preloaded images to volume ...
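
The preload step above runs tar inside a throwaway container so the lz4 tarball is unpacked directly into the auto-388132 volume. A sketch of invoking the same docker command from Go (arguments copied from the log, image digest omitted for brevity; only the wrapper is illustrative):

package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", "/home/jenkins/minikube-integration/21625-284583/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro",
		"-v", "auto-388132:/extractDir",
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643",
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("extract failed: %v\n%s", err, out)
	}
	log.Printf("extraction took %s", time.Since(start)) // ~4.36s in the run above
}
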
	W1003 19:42:23.051260  499560 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1003 19:42:23.051380  499560 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1003 19:42:23.110704  499560 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-388132 --name auto-388132 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-388132 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-388132 --network auto-388132 --ip 192.168.85.2 --volume auto-388132:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d
	I1003 19:42:23.484247  499560 cli_runner.go:164] Run: docker container inspect auto-388132 --format={{.State.Running}}
	I1003 19:42:23.515260  499560 cli_runner.go:164] Run: docker container inspect auto-388132 --format={{.State.Status}}
	I1003 19:42:23.565497  499560 cli_runner.go:164] Run: docker exec auto-388132 stat /var/lib/dpkg/alternatives/iptables
	I1003 19:42:23.659617  499560 oci.go:144] the created container "auto-388132" has a running status.
	I1003 19:42:23.659658  499560 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21625-284583/.minikube/machines/auto-388132/id_rsa...
	I1003 19:42:24.000770  499560 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21625-284583/.minikube/machines/auto-388132/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1003 19:42:24.041942  499560 cli_runner.go:164] Run: docker container inspect auto-388132 --format={{.State.Status}}
	I1003 19:42:24.079664  499560 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1003 19:42:24.079687  499560 kic_runner.go:114] Args: [docker exec --privileged auto-388132 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1003 19:42:24.144044  499560 cli_runner.go:164] Run: docker container inspect auto-388132 --format={{.State.Status}}
	I1003 19:42:24.177459  499560 machine.go:93] provisionDockerMachine start ...
	I1003 19:42:24.177545  499560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-388132
	I1003 19:42:24.200738  499560 main.go:141] libmachine: Using SSH client type: native
	I1003 19:42:24.201083  499560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I1003 19:42:24.201094  499560 main.go:141] libmachine: About to run SSH command:
	hostname
	I1003 19:42:24.201824  499560 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35524->127.0.0.1:33468: read: connection reset by peer
	I1003 19:42:27.348815  499560 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-388132
	
	I1003 19:42:27.348838  499560 ubuntu.go:182] provisioning hostname "auto-388132"
	I1003 19:42:27.348902  499560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-388132
	I1003 19:42:27.372142  499560 main.go:141] libmachine: Using SSH client type: native
	I1003 19:42:27.372453  499560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I1003 19:42:27.372464  499560 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-388132 && echo "auto-388132" | sudo tee /etc/hostname
	I1003 19:42:27.535903  499560 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-388132
	
	I1003 19:42:27.535983  499560 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-388132
	I1003 19:42:27.558672  499560 main.go:141] libmachine: Using SSH client type: native
	I1003 19:42:27.559006  499560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I1003 19:42:27.559027  499560 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-388132' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-388132/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-388132' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1003 19:42:27.725536  499560 main.go:141] libmachine: SSH cmd err, output: <nil>: 
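
provisionDockerMachine above resolves the container's published 22/tcp port (33468 in this run) and then drives the node over SSH: first a bare hostname, then the sudo hostname / /etc/hosts script shown above. A minimal sketch of that first SSH round-trip with golang.org/x/crypto/ssh, assuming the key path and port from this run:

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/21625-284583/.minikube/machines/auto-388132/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a local test container
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33468", cfg) // port published by the docker driver
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()
	out, err := session.Output("hostname") // same first command as in the log
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s", out) // auto-388132
}
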
	I1003 19:42:27.725569  499560 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21625-284583/.minikube CaCertPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21625-284583/.minikube}
	I1003 19:42:27.725592  499560 ubuntu.go:190] setting up certificates
	I1003 19:42:27.725601  499560 provision.go:84] configureAuth start
	I1003 19:42:27.725659  499560 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-388132
	I1003 19:42:27.758033  499560 provision.go:143] copyHostCerts
	I1003 19:42:27.758098  499560 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-284583/.minikube/ca.pem, removing ...
	I1003 19:42:27.758107  499560 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-284583/.minikube/ca.pem
	I1003 19:42:27.758182  499560 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21625-284583/.minikube/ca.pem (1082 bytes)
	I1003 19:42:27.758274  499560 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-284583/.minikube/cert.pem, removing ...
	I1003 19:42:27.758279  499560 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-284583/.minikube/cert.pem
	I1003 19:42:27.758306  499560 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21625-284583/.minikube/cert.pem (1123 bytes)
	I1003 19:42:27.758362  499560 exec_runner.go:144] found /home/jenkins/minikube-integration/21625-284583/.minikube/key.pem, removing ...
	I1003 19:42:27.758367  499560 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21625-284583/.minikube/key.pem
	I1003 19:42:27.758389  499560 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21625-284583/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21625-284583/.minikube/key.pem (1675 bytes)
	I1003 19:42:27.758434  499560 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21625-284583/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21625-284583/.minikube/certs/ca-key.pem org=jenkins.auto-388132 san=[127.0.0.1 192.168.85.2 auto-388132 localhost minikube]
	
	
	==> CRI-O <==
	Oct 03 19:42:07 default-k8s-diff-port-842797 crio[653]: time="2025-10-03T19:42:07.438582054Z" level=info msg="Started container" PID=1646 containerID=fcd9a4b62f08b140b979667d54c075893895688705f962511f443c0a62e2c87a description=kube-system/storage-provisioner/storage-provisioner id=8c283931-2add-42cd-bb20-f43852dab8b2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b2908953482167880b2083818a8908543f655fb150f84dd673e49afa7137a542
	Oct 03 19:42:13 default-k8s-diff-port-842797 crio[653]: time="2025-10-03T19:42:13.054966075Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=aad187d1-bcc8-45b8-b449-eca432fcd67a name=/runtime.v1.ImageService/ImageStatus
	Oct 03 19:42:13 default-k8s-diff-port-842797 crio[653]: time="2025-10-03T19:42:13.060666817Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=75c5bf42-3ac8-4e91-a2e5-80320eab6ce7 name=/runtime.v1.ImageService/ImageStatus
	Oct 03 19:42:13 default-k8s-diff-port-842797 crio[653]: time="2025-10-03T19:42:13.064018664Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-t69v5/dashboard-metrics-scraper" id=c1caf0eb-68de-4cd4-aa8c-c2c3eac67f38 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:42:13 default-k8s-diff-port-842797 crio[653]: time="2025-10-03T19:42:13.064405985Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 19:42:13 default-k8s-diff-port-842797 crio[653]: time="2025-10-03T19:42:13.079042161Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 19:42:13 default-k8s-diff-port-842797 crio[653]: time="2025-10-03T19:42:13.079784318Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 03 19:42:13 default-k8s-diff-port-842797 crio[653]: time="2025-10-03T19:42:13.1015431Z" level=info msg="Created container 65412dde6368e824654930cf8979c4db2cbf6850df87b339d97d58e62d902100: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-t69v5/dashboard-metrics-scraper" id=c1caf0eb-68de-4cd4-aa8c-c2c3eac67f38 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 03 19:42:13 default-k8s-diff-port-842797 crio[653]: time="2025-10-03T19:42:13.102525671Z" level=info msg="Starting container: 65412dde6368e824654930cf8979c4db2cbf6850df87b339d97d58e62d902100" id=634253c7-b52b-44ed-bdd9-2e4de4e18641 name=/runtime.v1.RuntimeService/StartContainer
	Oct 03 19:42:13 default-k8s-diff-port-842797 crio[653]: time="2025-10-03T19:42:13.104304661Z" level=info msg="Started container" PID=1680 containerID=65412dde6368e824654930cf8979c4db2cbf6850df87b339d97d58e62d902100 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-t69v5/dashboard-metrics-scraper id=634253c7-b52b-44ed-bdd9-2e4de4e18641 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8ebb5ab1cd3b43dabf22a7875fc258bb43da490419dfe7e452a2e0b58810bb4a
	Oct 03 19:42:13 default-k8s-diff-port-842797 crio[653]: time="2025-10-03T19:42:13.412238854Z" level=info msg="Removing container: cb6ba62a79524c022af13411f2d607a2158a3f64d4fcbae344b14b0c3f296a83" id=972176b2-ade1-42bc-864c-96273b0c8f3f name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 03 19:42:13 default-k8s-diff-port-842797 crio[653]: time="2025-10-03T19:42:13.423913047Z" level=info msg="Error loading conmon cgroup of container cb6ba62a79524c022af13411f2d607a2158a3f64d4fcbae344b14b0c3f296a83: cgroup deleted" id=972176b2-ade1-42bc-864c-96273b0c8f3f name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 03 19:42:13 default-k8s-diff-port-842797 crio[653]: time="2025-10-03T19:42:13.431128036Z" level=info msg="Removed container cb6ba62a79524c022af13411f2d607a2158a3f64d4fcbae344b14b0c3f296a83: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-t69v5/dashboard-metrics-scraper" id=972176b2-ade1-42bc-864c-96273b0c8f3f name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 03 19:42:16 default-k8s-diff-port-842797 crio[653]: time="2025-10-03T19:42:16.121141119Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 03 19:42:16 default-k8s-diff-port-842797 crio[653]: time="2025-10-03T19:42:16.124912074Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 03 19:42:16 default-k8s-diff-port-842797 crio[653]: time="2025-10-03T19:42:16.124950065Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 03 19:42:16 default-k8s-diff-port-842797 crio[653]: time="2025-10-03T19:42:16.124981359Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 03 19:42:16 default-k8s-diff-port-842797 crio[653]: time="2025-10-03T19:42:16.128345096Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 03 19:42:16 default-k8s-diff-port-842797 crio[653]: time="2025-10-03T19:42:16.128379156Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 03 19:42:16 default-k8s-diff-port-842797 crio[653]: time="2025-10-03T19:42:16.128401926Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 03 19:42:16 default-k8s-diff-port-842797 crio[653]: time="2025-10-03T19:42:16.136255653Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 03 19:42:16 default-k8s-diff-port-842797 crio[653]: time="2025-10-03T19:42:16.136291223Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 03 19:42:16 default-k8s-diff-port-842797 crio[653]: time="2025-10-03T19:42:16.136313032Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 03 19:42:16 default-k8s-diff-port-842797 crio[653]: time="2025-10-03T19:42:16.141041517Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 03 19:42:16 default-k8s-diff-port-842797 crio[653]: time="2025-10-03T19:42:16.141083864Z" level=info msg="Updated default CNI network name to kindnet"
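
The CRI-O lines above are its CNI monitor reacting to kindnet writing 10-kindnet.conflist.temp and renaming it into place: each CREATE/WRITE/RENAME event triggers a re-scan of /etc/cni/net.d and an update of the default network name. A small sketch of the same directory-watch pattern using github.com/fsnotify/fsnotify (illustrative, not CRI-O's actual implementation):

package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

func main() {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()
	if err := w.Add("/etc/cni/net.d"); err != nil {
		log.Fatal(err)
	}
	for {
		select {
		case ev := <-w.Events:
			// CREATE, WRITE and RENAME of *.conflist(.temp) files all arrive here,
			// matching the "CNI monitoring event ..." lines in the CRI-O log.
			log.Printf("CNI monitoring event %s %q", ev.Op, ev.Name)
		case err := <-w.Errors:
			log.Printf("watch error: %v", err)
		}
	}
}
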
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	65412dde6368e       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           16 seconds ago       Exited              dashboard-metrics-scraper   2                   8ebb5ab1cd3b4       dashboard-metrics-scraper-6ffb444bf9-t69v5             kubernetes-dashboard
	fcd9a4b62f08b       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           21 seconds ago       Running             storage-provisioner         2                   b290895348216       storage-provisioner                                    kube-system
	ebeaa951c920c       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   42 seconds ago       Running             kubernetes-dashboard        0                   437b1214d3710       kubernetes-dashboard-855c9754f9-ll25f                  kubernetes-dashboard
	82b814a25f1f5       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           53 seconds ago       Running             coredns                     1                   046b501bedc89       coredns-66bc5c9577-l8knz                               kube-system
	36a66515edb26       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           53 seconds ago       Running             kindnet-cni                 1                   22637bc751c87       kindnet-96q8s                                          kube-system
	ec43e1559e952       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           54 seconds ago       Running             busybox                     1                   87a991bd17a1c       busybox                                                default
	9c16c456853d8       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           54 seconds ago       Running             kube-proxy                  1                   041cadd896bc1       kube-proxy-gvslj                                       kube-system
	5a18bc974715b       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           54 seconds ago       Exited              storage-provisioner         1                   b290895348216       storage-provisioner                                    kube-system
	02535cb769088       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   eb1f316826838       kube-apiserver-default-k8s-diff-port-842797            kube-system
	72a3c6c093ee7       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   7e5d1aa095aac       kube-scheduler-default-k8s-diff-port-842797            kube-system
	95f720e182dbb       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   2e71251514078       etcd-default-k8s-diff-port-842797                      kube-system
	a6485da9cdb1c       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   91c55381e919a       kube-controller-manager-default-k8s-diff-port-842797   kube-system
	
	
	==> coredns [82b814a25f1f5e3ed6844334a8df2fe3ccfa2c194455da2c0e360c30e6aaca7e] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35110 - 39026 "HINFO IN 3196626566350724291.1470925459604482367. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015689967s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
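
The coredns messages above show the kubernetes plugin repeatedly failing to reach the API service at 10.96.0.1:443 with i/o timeouts right after the restart, then syncing once kube-proxy and kindnet restore connectivity. A quick reachability probe for that in-cluster endpoint, as a sketch (the address is the default ClusterIP of the kubernetes service):

package main

import (
	"log"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 3*time.Second)
	if err != nil {
		// Corresponds to the "dial tcp 10.96.0.1:443: i/o timeout" errors above.
		log.Fatalf("kubernetes service unreachable: %v", err)
	}
	conn.Close()
	log.Println("kubernetes service TCP-reachable")
}
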
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-842797
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-842797
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a43873c79fc22f8b1ccd29d3dfa635d392b09335
	                    minikube.k8s.io/name=default-k8s-diff-port-842797
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_03T19_40_05_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 03 Oct 2025 19:40:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-842797
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 03 Oct 2025 19:42:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 03 Oct 2025 19:42:04 +0000   Fri, 03 Oct 2025 19:39:55 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 03 Oct 2025 19:42:04 +0000   Fri, 03 Oct 2025 19:39:55 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 03 Oct 2025 19:42:04 +0000   Fri, 03 Oct 2025 19:39:55 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 03 Oct 2025 19:42:04 +0000   Fri, 03 Oct 2025 19:40:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-842797
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 6976841640c04eeba284ef295c75e540
	  System UUID:                0315913a-ac76-434b-8962-2420e3ad1d8e
	  Boot ID:                    3762136e-8bec-4104-a5cb-0b1976f6048e
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-66bc5c9577-l8knz                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m19s
	  kube-system                 etcd-default-k8s-diff-port-842797                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m24s
	  kube-system                 kindnet-96q8s                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m19s
	  kube-system                 kube-apiserver-default-k8s-diff-port-842797             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-842797    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 kube-proxy-gvslj                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m19s
	  kube-system                 kube-scheduler-default-k8s-diff-port-842797             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m17s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-t69v5              0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-ll25f                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m17s                  kube-proxy       
	  Normal   Starting                 51s                    kube-proxy       
	  Warning  CgroupV1                 2m36s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m36s (x8 over 2m36s)  kubelet          Node default-k8s-diff-port-842797 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m36s (x8 over 2m36s)  kubelet          Node default-k8s-diff-port-842797 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m36s (x8 over 2m36s)  kubelet          Node default-k8s-diff-port-842797 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m25s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m25s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m24s                  kubelet          Node default-k8s-diff-port-842797 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m24s                  kubelet          Node default-k8s-diff-port-842797 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m24s                  kubelet          Node default-k8s-diff-port-842797 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m20s                  node-controller  Node default-k8s-diff-port-842797 event: Registered Node default-k8s-diff-port-842797 in Controller
	  Normal   NodeReady                98s                    kubelet          Node default-k8s-diff-port-842797 status is now: NodeReady
	  Normal   Starting                 64s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 64s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  63s (x8 over 63s)      kubelet          Node default-k8s-diff-port-842797 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    63s (x8 over 63s)      kubelet          Node default-k8s-diff-port-842797 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     63s (x8 over 63s)      kubelet          Node default-k8s-diff-port-842797 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           50s                    node-controller  Node default-k8s-diff-port-842797 event: Registered Node default-k8s-diff-port-842797 in Controller
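
The Conditions block in the node description is what the test's wait logic keys on: Ready must be True while MemoryPressure, DiskPressure and PIDPressure stay False. A minimal client-go sketch that reads the same condition (kubeconfig path and node name taken from this run; the rest is illustrative):

package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21625-284583/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "default-k8s-diff-port-842797", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			fmt.Printf("Ready=%s reason=%s since=%v\n", c.Status, c.Reason, c.LastTransitionTime)
		}
	}
}
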
	
	
	==> dmesg <==
	[ +24.839009] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:13] overlayfs: idmapped layers are currently not supported
	[ +26.493253] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:15] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:16] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:17] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000010] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[Oct 3 19:18] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:20] overlayfs: idmapped layers are currently not supported
	[ +32.018892] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:22] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:24] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:26] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:32] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:34] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:35] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:36] overlayfs: idmapped layers are currently not supported
	[  +4.740983] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:38] overlayfs: idmapped layers are currently not supported
	[ +12.897300] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:39] overlayfs: idmapped layers are currently not supported
	[  +4.104516] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:41] overlayfs: idmapped layers are currently not supported
	[  +1.990678] overlayfs: idmapped layers are currently not supported
	[Oct 3 19:42] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [95f720e182dbb5dbc9ca0b55d30ef0869679c1087e3e87174822cffb7d42a5ea] <==
	{"level":"warn","ts":"2025-10-03T19:41:31.128494Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:41:31.177188Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:41:31.207636Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:41:31.230433Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:41:31.265824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:41:31.289977Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:41:31.332610Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:41:31.347487Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:41:31.377932Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:41:31.403330Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:41:31.493006Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:41:31.494483Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:41:31.521981Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:41:31.546959Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:41:31.584011Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:41:31.612023Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:41:31.647428Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:41:31.686740Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:41:31.720303Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:41:31.800754Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:41:31.878396Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:41:31.909423Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:41:31.933025Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:41:31.969600Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-03T19:41:32.072882Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40706","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 19:42:29 up  2:25,  0 user,  load average: 5.07, 3.88, 2.72
	Linux default-k8s-diff-port-842797 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [36a66515edb26dfbfcd4d2a7fd0c17ac0037c754a0f101544d32fe0f3d820b72] <==
	I1003 19:41:35.876299       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1003 19:41:35.876691       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1003 19:41:35.876850       1 main.go:148] setting mtu 1500 for CNI 
	I1003 19:41:35.876863       1 main.go:178] kindnetd IP family: "ipv4"
	I1003 19:41:35.876875       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-03T19:41:36Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1003 19:41:36.121230       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1003 19:41:36.121255       1 controller.go:381] "Waiting for informer caches to sync"
	I1003 19:41:36.121264       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1003 19:41:36.136982       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1003 19:42:06.123157       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1003 19:42:06.123333       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1003 19:42:06.123440       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1003 19:42:06.137802       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1003 19:42:07.722250       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1003 19:42:07.722287       1 metrics.go:72] Registering metrics
	I1003 19:42:07.722354       1 controller.go:711] "Syncing nftables rules"
	I1003 19:42:16.120820       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1003 19:42:16.120875       1 main.go:301] handling current node
	I1003 19:42:26.120631       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1003 19:42:26.120667       1 main.go:301] handling current node
	
	
	==> kube-apiserver [02535cb7690885e90adcc200c551315486edf2d6f1bb2cbd015e185c373fe0c2] <==
	I1003 19:41:34.174415       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1003 19:41:34.174423       1 cache.go:39] Caches are synced for autoregister controller
	I1003 19:41:34.207834       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1003 19:41:34.229023       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1003 19:41:34.229134       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1003 19:41:34.229179       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1003 19:41:34.229232       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1003 19:41:34.229239       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1003 19:41:34.229321       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1003 19:41:34.229351       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1003 19:41:34.280884       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1003 19:41:34.280913       1 policy_source.go:240] refreshing policies
	I1003 19:41:34.304961       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1003 19:41:34.334894       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1003 19:41:34.409379       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1003 19:41:34.466172       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1003 19:41:36.744242       1 controller.go:667] quota admission added evaluator for: namespaces
	I1003 19:41:36.928046       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1003 19:41:37.124773       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1003 19:41:37.188829       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1003 19:41:37.518400       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.130.95"}
	I1003 19:41:37.534673       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.41.173"}
	I1003 19:41:39.625024       1 controller.go:667] quota admission added evaluator for: endpoints
	I1003 19:41:39.774074       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1003 19:41:39.825525       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [a6485da9cdb1c66096d6663ef94b1c675b5cc8904328eba3b2537fa5c260cdba] <==
	I1003 19:41:39.204890       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1003 19:41:39.204961       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-842797"
	I1003 19:41:39.205016       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1003 19:41:39.206588       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1003 19:41:39.208918       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1003 19:41:39.209059       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1003 19:41:39.216636       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1003 19:41:39.218315       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1003 19:41:39.218391       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1003 19:41:39.218601       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1003 19:41:39.218912       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1003 19:41:39.219129       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1003 19:41:39.219153       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1003 19:41:39.241216       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1003 19:41:39.243573       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1003 19:41:39.247862       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1003 19:41:39.255281       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1003 19:41:39.257743       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1003 19:41:39.263468       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1003 19:41:39.269871       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1003 19:41:39.269921       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1003 19:41:39.274322       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1003 19:41:39.274356       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1003 19:41:39.274364       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1003 19:41:39.289295       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [9c16c456853d85ab9638243feba1261a05c0a8c713822477310b074fb4eb4723] <==
	I1003 19:41:37.435345       1 server_linux.go:53] "Using iptables proxy"
	I1003 19:41:37.713867       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1003 19:41:37.838243       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1003 19:41:37.838360       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1003 19:41:37.838479       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1003 19:41:38.070953       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1003 19:41:38.071070       1 server_linux.go:132] "Using iptables Proxier"
	I1003 19:41:38.219743       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1003 19:41:38.220135       1 server.go:527] "Version info" version="v1.34.1"
	I1003 19:41:38.220199       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1003 19:41:38.222665       1 config.go:200] "Starting service config controller"
	I1003 19:41:38.222759       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1003 19:41:38.222805       1 config.go:106] "Starting endpoint slice config controller"
	I1003 19:41:38.222833       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1003 19:41:38.222876       1 config.go:403] "Starting serviceCIDR config controller"
	I1003 19:41:38.222903       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1003 19:41:38.227603       1 config.go:309] "Starting node config controller"
	I1003 19:41:38.232186       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1003 19:41:38.232241       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1003 19:41:38.323006       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1003 19:41:38.323018       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1003 19:41:38.327049       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [72a3c6c093ee7526caa8d968d0ef1b63f258556b89c398a06f6b15295b410635] <==
	I1003 19:41:31.172582       1 serving.go:386] Generated self-signed cert in-memory
	I1003 19:41:35.988704       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1003 19:41:36.007286       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1003 19:41:36.092424       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1003 19:41:36.092529       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1003 19:41:36.092551       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1003 19:41:36.092583       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1003 19:41:36.107743       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1003 19:41:36.107768       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1003 19:41:36.107787       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1003 19:41:36.107793       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1003 19:41:36.312453       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1003 19:41:36.312517       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1003 19:41:36.336348       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Oct 03 19:41:40 default-k8s-diff-port-842797 kubelet[782]: I1003 19:41:40.078497     782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/5b3fda27-6d63-4fd1-8e59-407c16cc358b-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-ll25f\" (UID: \"5b3fda27-6d63-4fd1-8e59-407c16cc358b\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-ll25f"
	Oct 03 19:41:40 default-k8s-diff-port-842797 kubelet[782]: I1003 19:41:40.078607     782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jvm2\" (UniqueName: \"kubernetes.io/projected/a9372a39-bc89-4ed9-8bd1-c11c31755813-kube-api-access-2jvm2\") pod \"dashboard-metrics-scraper-6ffb444bf9-t69v5\" (UID: \"a9372a39-bc89-4ed9-8bd1-c11c31755813\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-t69v5"
	Oct 03 19:41:40 default-k8s-diff-port-842797 kubelet[782]: I1003 19:41:40.078772     782 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2j9p\" (UniqueName: \"kubernetes.io/projected/5b3fda27-6d63-4fd1-8e59-407c16cc358b-kube-api-access-x2j9p\") pod \"kubernetes-dashboard-855c9754f9-ll25f\" (UID: \"5b3fda27-6d63-4fd1-8e59-407c16cc358b\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-ll25f"
	Oct 03 19:41:40 default-k8s-diff-port-842797 kubelet[782]: I1003 19:41:40.109121     782 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 03 19:41:40 default-k8s-diff-port-842797 kubelet[782]: W1003 19:41:40.291413     782 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/dd1cbce823c3c68d280f6d6431457674ab5e928f19effd4b41908fc33cc07deb/crio-437b1214d371075f22c6ded412c43182d71a054a09a13a567185d722f8876c7b WatchSource:0}: Error finding container 437b1214d371075f22c6ded412c43182d71a054a09a13a567185d722f8876c7b: Status 404 returned error can't find the container with id 437b1214d371075f22c6ded412c43182d71a054a09a13a567185d722f8876c7b
	Oct 03 19:41:40 default-k8s-diff-port-842797 kubelet[782]: W1003 19:41:40.292199     782 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/dd1cbce823c3c68d280f6d6431457674ab5e928f19effd4b41908fc33cc07deb/crio-8ebb5ab1cd3b43dabf22a7875fc258bb43da490419dfe7e452a2e0b58810bb4a WatchSource:0}: Error finding container 8ebb5ab1cd3b43dabf22a7875fc258bb43da490419dfe7e452a2e0b58810bb4a: Status 404 returned error can't find the container with id 8ebb5ab1cd3b43dabf22a7875fc258bb43da490419dfe7e452a2e0b58810bb4a
	Oct 03 19:41:54 default-k8s-diff-port-842797 kubelet[782]: I1003 19:41:54.336478     782 scope.go:117] "RemoveContainer" containerID="eb5107aeafecd8279566dfc81100d4a288c2d56be8ce3bffcc2e790d76c13a76"
	Oct 03 19:41:54 default-k8s-diff-port-842797 kubelet[782]: I1003 19:41:54.374590     782 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-ll25f" podStartSLOduration=8.542529471 podStartE2EDuration="15.374569239s" podCreationTimestamp="2025-10-03 19:41:39 +0000 UTC" firstStartedPulling="2025-10-03 19:41:40.294688102 +0000 UTC m=+14.587987913" lastFinishedPulling="2025-10-03 19:41:47.12672787 +0000 UTC m=+21.420027681" observedRunningTime="2025-10-03 19:41:47.351146298 +0000 UTC m=+21.644446108" watchObservedRunningTime="2025-10-03 19:41:54.374569239 +0000 UTC m=+28.667869058"
	Oct 03 19:41:55 default-k8s-diff-port-842797 kubelet[782]: I1003 19:41:55.340958     782 scope.go:117] "RemoveContainer" containerID="eb5107aeafecd8279566dfc81100d4a288c2d56be8ce3bffcc2e790d76c13a76"
	Oct 03 19:41:55 default-k8s-diff-port-842797 kubelet[782]: I1003 19:41:55.341773     782 scope.go:117] "RemoveContainer" containerID="cb6ba62a79524c022af13411f2d607a2158a3f64d4fcbae344b14b0c3f296a83"
	Oct 03 19:41:55 default-k8s-diff-port-842797 kubelet[782]: E1003 19:41:55.342062     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-t69v5_kubernetes-dashboard(a9372a39-bc89-4ed9-8bd1-c11c31755813)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-t69v5" podUID="a9372a39-bc89-4ed9-8bd1-c11c31755813"
	Oct 03 19:41:56 default-k8s-diff-port-842797 kubelet[782]: I1003 19:41:56.344798     782 scope.go:117] "RemoveContainer" containerID="cb6ba62a79524c022af13411f2d607a2158a3f64d4fcbae344b14b0c3f296a83"
	Oct 03 19:41:56 default-k8s-diff-port-842797 kubelet[782]: E1003 19:41:56.344955     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-t69v5_kubernetes-dashboard(a9372a39-bc89-4ed9-8bd1-c11c31755813)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-t69v5" podUID="a9372a39-bc89-4ed9-8bd1-c11c31755813"
	Oct 03 19:42:00 default-k8s-diff-port-842797 kubelet[782]: I1003 19:42:00.208490     782 scope.go:117] "RemoveContainer" containerID="cb6ba62a79524c022af13411f2d607a2158a3f64d4fcbae344b14b0c3f296a83"
	Oct 03 19:42:00 default-k8s-diff-port-842797 kubelet[782]: E1003 19:42:00.211381     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-t69v5_kubernetes-dashboard(a9372a39-bc89-4ed9-8bd1-c11c31755813)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-t69v5" podUID="a9372a39-bc89-4ed9-8bd1-c11c31755813"
	Oct 03 19:42:07 default-k8s-diff-port-842797 kubelet[782]: I1003 19:42:07.374470     782 scope.go:117] "RemoveContainer" containerID="5a18bc974715b940aec68811c77d1e74f00fd5e65c2098ece1b868b46c87fb02"
	Oct 03 19:42:13 default-k8s-diff-port-842797 kubelet[782]: I1003 19:42:13.053998     782 scope.go:117] "RemoveContainer" containerID="cb6ba62a79524c022af13411f2d607a2158a3f64d4fcbae344b14b0c3f296a83"
	Oct 03 19:42:13 default-k8s-diff-port-842797 kubelet[782]: I1003 19:42:13.400796     782 scope.go:117] "RemoveContainer" containerID="cb6ba62a79524c022af13411f2d607a2158a3f64d4fcbae344b14b0c3f296a83"
	Oct 03 19:42:13 default-k8s-diff-port-842797 kubelet[782]: I1003 19:42:13.401115     782 scope.go:117] "RemoveContainer" containerID="65412dde6368e824654930cf8979c4db2cbf6850df87b339d97d58e62d902100"
	Oct 03 19:42:13 default-k8s-diff-port-842797 kubelet[782]: E1003 19:42:13.401270     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-t69v5_kubernetes-dashboard(a9372a39-bc89-4ed9-8bd1-c11c31755813)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-t69v5" podUID="a9372a39-bc89-4ed9-8bd1-c11c31755813"
	Oct 03 19:42:20 default-k8s-diff-port-842797 kubelet[782]: I1003 19:42:20.207241     782 scope.go:117] "RemoveContainer" containerID="65412dde6368e824654930cf8979c4db2cbf6850df87b339d97d58e62d902100"
	Oct 03 19:42:20 default-k8s-diff-port-842797 kubelet[782]: E1003 19:42:20.207429     782 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-t69v5_kubernetes-dashboard(a9372a39-bc89-4ed9-8bd1-c11c31755813)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-t69v5" podUID="a9372a39-bc89-4ed9-8bd1-c11c31755813"
	Oct 03 19:42:24 default-k8s-diff-port-842797 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 03 19:42:24 default-k8s-diff-port-842797 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 03 19:42:24 default-k8s-diff-port-842797 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [ebeaa951c920c3a9a1c23debb610071301437a5219a14cd30b8336ab848dfff9] <==
	2025/10/03 19:41:47 Using namespace: kubernetes-dashboard
	2025/10/03 19:41:47 Using in-cluster config to connect to apiserver
	2025/10/03 19:41:47 Using secret token for csrf signing
	2025/10/03 19:41:47 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/03 19:41:47 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/03 19:41:47 Successful initial request to the apiserver, version: v1.34.1
	2025/10/03 19:41:47 Generating JWE encryption key
	2025/10/03 19:41:47 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/03 19:41:47 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/03 19:41:48 Initializing JWE encryption key from synchronized object
	2025/10/03 19:41:48 Creating in-cluster Sidecar client
	2025/10/03 19:41:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/03 19:41:48 Serving insecurely on HTTP port: 9090
	2025/10/03 19:42:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/03 19:41:47 Starting overwatch
	
	
	==> storage-provisioner [5a18bc974715b940aec68811c77d1e74f00fd5e65c2098ece1b868b46c87fb02] <==
	I1003 19:41:36.142438       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1003 19:42:06.366195       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [fcd9a4b62f08b140b979667d54c075893895688705f962511f443c0a62e2c87a] <==
	I1003 19:42:07.471885       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1003 19:42:07.498409       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1003 19:42:07.498555       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1003 19:42:07.504945       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:42:10.959756       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:42:15.220193       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:42:18.818606       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:42:21.872455       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:42:24.894647       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:42:24.900217       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1003 19:42:24.900363       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1003 19:42:24.900519       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-842797_75d7bf1b-5163-46dc-b538-e984e14535b7!
	I1003 19:42:24.900816       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cf2e8791-f5d6-4403-8f58-225b6bccc9d1", APIVersion:"v1", ResourceVersion:"680", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-842797_75d7bf1b-5163-46dc-b538-e984e14535b7 became leader
	W1003 19:42:24.914286       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:42:24.925416       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1003 19:42:25.001030       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-842797_75d7bf1b-5163-46dc-b538-e984e14535b7!
	W1003 19:42:26.930175       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:42:26.936074       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:42:28.943418       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1003 19:42:28.957047       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-842797 -n default-k8s-diff-port-842797
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-842797 -n default-k8s-diff-port-842797: exit status 2 (440.282538ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-842797 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (7.16s)
E1003 19:48:36.603005  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/flannel-388132/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 19:48:36.609393  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/flannel-388132/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 19:48:36.620793  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/flannel-388132/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 19:48:36.642207  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/flannel-388132/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 19:48:36.683705  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/flannel-388132/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 19:48:36.765182  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/flannel-388132/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 19:48:36.926799  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/flannel-388132/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 19:48:37.248675  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/flannel-388132/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 19:48:37.890645  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/flannel-388132/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 19:48:38.521004  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/default-k8s-diff-port-842797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 19:48:39.172879  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/flannel-388132/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 19:48:41.734153  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/flannel-388132/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 19:48:46.290155  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/auto-388132/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 19:48:46.296499  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/auto-388132/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 19:48:46.307832  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/auto-388132/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 19:48:46.329230  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/auto-388132/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 19:48:46.370738  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/auto-388132/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 19:48:46.452240  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/auto-388132/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 19:48:46.614103  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/auto-388132/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 19:48:46.855817  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/flannel-388132/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 19:48:46.936217  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/auto-388132/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 19:48:47.578072  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/auto-388132/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 19:48:48.859763  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/auto-388132/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 19:48:51.421125  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/auto-388132/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 19:48:56.542491  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/auto-388132/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 19:48:57.097127  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/flannel-388132/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 19:49:06.785171  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/auto-388132/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    

Test pass (258/326)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 6.18
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.13
9 TestDownloadOnly/v1.28.0/DeleteAll 0.32
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 5.83
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.09
18 TestDownloadOnly/v1.34.1/DeleteAll 0.22
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.61
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 171.27
31 TestAddons/serial/GCPAuth/Namespaces 0.23
32 TestAddons/serial/GCPAuth/FakeCredentials 8.92
48 TestAddons/StoppedEnableDisable 12.28
49 TestCertOptions 39.74
50 TestCertExpiration 232.88
58 TestErrorSpam/setup 35.52
59 TestErrorSpam/start 0.81
60 TestErrorSpam/status 1.05
61 TestErrorSpam/pause 5.9
62 TestErrorSpam/unpause 5.43
63 TestErrorSpam/stop 1.43
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 79.16
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 42.14
70 TestFunctional/serial/KubeContext 0.06
71 TestFunctional/serial/KubectlGetPods 0.1
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.52
75 TestFunctional/serial/CacheCmd/cache/add_local 1.07
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.79
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.14
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
83 TestFunctional/serial/ExtraConfig 33.5
84 TestFunctional/serial/ComponentHealth 0.1
85 TestFunctional/serial/LogsCmd 1.44
86 TestFunctional/serial/LogsFileCmd 1.65
87 TestFunctional/serial/InvalidService 4.11
89 TestFunctional/parallel/ConfigCmd 0.47
90 TestFunctional/parallel/DashboardCmd 9.34
91 TestFunctional/parallel/DryRun 0.47
92 TestFunctional/parallel/InternationalLanguage 0.21
93 TestFunctional/parallel/StatusCmd 1.04
98 TestFunctional/parallel/AddonsCmd 0.21
99 TestFunctional/parallel/PersistentVolumeClaim 25.61
101 TestFunctional/parallel/SSHCmd 0.76
102 TestFunctional/parallel/CpCmd 2.45
104 TestFunctional/parallel/FileSync 0.37
105 TestFunctional/parallel/CertSync 2.59
109 TestFunctional/parallel/NodeLabels 0.11
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.87
113 TestFunctional/parallel/License 0.35
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.7
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.43
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.13
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
127 TestFunctional/parallel/ProfileCmd/profile_list 0.43
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.43
129 TestFunctional/parallel/MountCmd/any-port 7.73
130 TestFunctional/parallel/MountCmd/specific-port 1.7
131 TestFunctional/parallel/MountCmd/VerifyCleanup 1.3
132 TestFunctional/parallel/ServiceCmd/List 0.64
133 TestFunctional/parallel/ServiceCmd/JSONOutput 1.42
137 TestFunctional/parallel/Version/short 0.07
138 TestFunctional/parallel/Version/components 1.31
139 TestFunctional/parallel/ImageCommands/ImageListShort 0.28
140 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
141 TestFunctional/parallel/ImageCommands/ImageListJson 0.29
142 TestFunctional/parallel/ImageCommands/ImageListYaml 0.26
143 TestFunctional/parallel/ImageCommands/ImageBuild 3.87
144 TestFunctional/parallel/ImageCommands/Setup 0.77
145 TestFunctional/parallel/UpdateContextCmd/no_changes 0.22
146 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.19
147 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.23
152 TestFunctional/parallel/ImageCommands/ImageRemove 0.56
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 203.94
163 TestMultiControlPlane/serial/DeployApp 6.83
164 TestMultiControlPlane/serial/PingHostFromPods 1.48
165 TestMultiControlPlane/serial/AddWorkerNode 60.42
166 TestMultiControlPlane/serial/NodeLabels 0.11
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.05
168 TestMultiControlPlane/serial/CopyFile 19.32
169 TestMultiControlPlane/serial/StopSecondaryNode 12.77
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.76
171 TestMultiControlPlane/serial/RestartSecondaryNode 32.29
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.22
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 118.58
174 TestMultiControlPlane/serial/DeleteSecondaryNode 10.88
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.78
176 TestMultiControlPlane/serial/StopCluster 35.71
177 TestMultiControlPlane/serial/RestartCluster 162.8
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.76
179 TestMultiControlPlane/serial/AddSecondaryNode 80.79
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.05
184 TestJSONOutput/start/Command 82.71
185 TestJSONOutput/start/Audit 0
187 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Audit 0
193 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Audit 0
199 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/stop/Command 5.73
203 TestJSONOutput/stop/Audit 0
205 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
207 TestErrorJSONOutput 0.25
209 TestKicCustomNetwork/create_custom_network 39.3
210 TestKicCustomNetwork/use_default_bridge_network 35.3
211 TestKicExistingNetwork 34.27
212 TestKicCustomSubnet 31.68
213 TestKicStaticIP 37.66
214 TestMainNoArgs 0.05
215 TestMinikubeProfile 71.83
218 TestMountStart/serial/StartWithMountFirst 9.82
219 TestMountStart/serial/VerifyMountFirst 0.27
220 TestMountStart/serial/StartWithMountSecond 9.16
221 TestMountStart/serial/VerifyMountSecond 0.27
222 TestMountStart/serial/DeleteFirst 1.62
223 TestMountStart/serial/VerifyMountPostDelete 0.29
224 TestMountStart/serial/Stop 1.22
225 TestMountStart/serial/RestartStopped 7.85
226 TestMountStart/serial/VerifyMountPostStop 0.27
229 TestMultiNode/serial/FreshStart2Nodes 136.26
230 TestMultiNode/serial/DeployApp2Nodes 4.95
231 TestMultiNode/serial/PingHostFrom2Pods 0.9
232 TestMultiNode/serial/AddNode 58.72
233 TestMultiNode/serial/MultiNodeLabels 0.09
234 TestMultiNode/serial/ProfileList 0.69
235 TestMultiNode/serial/CopyFile 10.23
236 TestMultiNode/serial/StopNode 2.31
237 TestMultiNode/serial/StartAfterStop 7.99
238 TestMultiNode/serial/RestartKeepsNodes 78.32
239 TestMultiNode/serial/DeleteNode 5.65
240 TestMultiNode/serial/StopMultiNode 23.98
241 TestMultiNode/serial/RestartMultiNode 52.12
242 TestMultiNode/serial/ValidateNameConflict 36.55
247 TestPreload 125.75
249 TestScheduledStopUnix 112.04
252 TestInsufficientStorage 13.13
253 TestRunningBinaryUpgrade 56.79
255 TestKubernetesUpgrade 356.35
256 TestMissingContainerUpgrade 119.3
258 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
259 TestNoKubernetes/serial/StartWithK8s 49.43
260 TestNoKubernetes/serial/StartWithStopK8s 34.78
261 TestNoKubernetes/serial/Start 9.99
262 TestNoKubernetes/serial/VerifyK8sNotRunning 0.26
263 TestNoKubernetes/serial/ProfileList 1.23
264 TestNoKubernetes/serial/Stop 1.26
265 TestNoKubernetes/serial/StartNoArgs 7.21
266 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.37
267 TestStoppedBinaryUpgrade/Setup 0.69
268 TestStoppedBinaryUpgrade/Upgrade 62.18
269 TestStoppedBinaryUpgrade/MinikubeLogs 1.24
278 TestPause/serial/Start 84.58
279 TestPause/serial/SecondStartNoReconfiguration 24.75
288 TestNetworkPlugins/group/false 3.66
293 TestStartStop/group/old-k8s-version/serial/FirstStart 62.69
294 TestStartStop/group/old-k8s-version/serial/DeployApp 8.69
296 TestStartStop/group/old-k8s-version/serial/Stop 12.12
298 TestStartStop/group/no-preload/serial/FirstStart 72.44
299 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.3
300 TestStartStop/group/old-k8s-version/serial/SecondStart 60.35
301 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
302 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
303 TestStartStop/group/no-preload/serial/DeployApp 8.4
304 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
307 TestStartStop/group/no-preload/serial/Stop 12.08
309 TestStartStop/group/embed-certs/serial/FirstStart 89.97
310 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
311 TestStartStop/group/no-preload/serial/SecondStart 63.91
312 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
313 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
314 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.23
316 TestStartStop/group/embed-certs/serial/DeployApp 10.45
318 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 88.52
320 TestStartStop/group/embed-certs/serial/Stop 12.03
321 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.26
322 TestStartStop/group/embed-certs/serial/SecondStart 53.02
323 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
324 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.14
325 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.23
327 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.44
329 TestStartStop/group/newest-cni/serial/FirstStart 46.53
331 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.05
332 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.26
333 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 53.92
334 TestStartStop/group/newest-cni/serial/DeployApp 0
336 TestStartStop/group/newest-cni/serial/Stop 2.17
337 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
338 TestStartStop/group/newest-cni/serial/SecondStart 14.79
339 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
340 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
341 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
343 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
344 TestNetworkPlugins/group/auto/Start 87.86
345 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.16
346 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.32
348 TestNetworkPlugins/group/flannel/Start 62.82
349 TestNetworkPlugins/group/flannel/ControllerPod 6.01
350 TestNetworkPlugins/group/flannel/KubeletFlags 0.35
351 TestNetworkPlugins/group/flannel/NetCatPod 11.28
352 TestNetworkPlugins/group/auto/KubeletFlags 0.38
353 TestNetworkPlugins/group/auto/NetCatPod 12.34
354 TestNetworkPlugins/group/flannel/DNS 0.17
355 TestNetworkPlugins/group/flannel/Localhost 0.13
356 TestNetworkPlugins/group/flannel/HairPin 0.14
357 TestNetworkPlugins/group/auto/DNS 0.16
358 TestNetworkPlugins/group/auto/Localhost 0.17
359 TestNetworkPlugins/group/auto/HairPin 0.13
360 TestNetworkPlugins/group/calico/Start 72.62
361 TestNetworkPlugins/group/custom-flannel/Start 71.92
362 TestNetworkPlugins/group/calico/ControllerPod 6.01
363 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.36
364 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.26
365 TestNetworkPlugins/group/calico/KubeletFlags 0.33
366 TestNetworkPlugins/group/calico/NetCatPod 12.35
367 TestNetworkPlugins/group/custom-flannel/DNS 0.17
368 TestNetworkPlugins/group/custom-flannel/Localhost 0.13
369 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
370 TestNetworkPlugins/group/calico/DNS 0.17
371 TestNetworkPlugins/group/calico/Localhost 0.15
372 TestNetworkPlugins/group/calico/HairPin 0.12
373 TestNetworkPlugins/group/kindnet/Start 91.63
374 TestNetworkPlugins/group/bridge/Start 52.95
375 TestNetworkPlugins/group/bridge/KubeletFlags 0.32
376 TestNetworkPlugins/group/bridge/NetCatPod 11.27
377 TestNetworkPlugins/group/bridge/DNS 0.16
378 TestNetworkPlugins/group/bridge/Localhost 0.13
379 TestNetworkPlugins/group/bridge/HairPin 0.13
380 TestNetworkPlugins/group/enable-default-cni/Start 87.93
381 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
382 TestNetworkPlugins/group/kindnet/KubeletFlags 0.4
383 TestNetworkPlugins/group/kindnet/NetCatPod 11.32
384 TestNetworkPlugins/group/kindnet/DNS 0.21
385 TestNetworkPlugins/group/kindnet/Localhost 0.18
386 TestNetworkPlugins/group/kindnet/HairPin 0.2
387 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.29
388 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.28
389 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
390 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
391 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
TestDownloadOnly/v1.28.0/json-events (6.18s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-487194 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-487194 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (6.179255819s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (6.18s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1003 18:26:44.957159  286434 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1003 18:26:44.957238  286434 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21625-284583/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-487194
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-487194: exit status 85 (127.802772ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-487194 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-487194 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/03 18:26:38
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 18:26:38.827335  286439 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:26:38.827563  286439 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:26:38.827595  286439 out.go:374] Setting ErrFile to fd 2...
	I1003 18:26:38.827618  286439 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:26:38.827933  286439 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-284583/.minikube/bin
	W1003 18:26:38.828102  286439 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21625-284583/.minikube/config/config.json: open /home/jenkins/minikube-integration/21625-284583/.minikube/config/config.json: no such file or directory
	I1003 18:26:38.828598  286439 out.go:368] Setting JSON to true
	I1003 18:26:38.829524  286439 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":4150,"bootTime":1759511849,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1003 18:26:38.829625  286439 start.go:140] virtualization:  
	I1003 18:26:38.833708  286439 out.go:99] [download-only-487194] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1003 18:26:38.833905  286439 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21625-284583/.minikube/cache/preloaded-tarball: no such file or directory
	I1003 18:26:38.833954  286439 notify.go:220] Checking for updates...
	I1003 18:26:38.836995  286439 out.go:171] MINIKUBE_LOCATION=21625
	I1003 18:26:38.840192  286439 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 18:26:38.843228  286439 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21625-284583/kubeconfig
	I1003 18:26:38.846245  286439 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-284583/.minikube
	I1003 18:26:38.849275  286439 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1003 18:26:38.855033  286439 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1003 18:26:38.855288  286439 driver.go:421] Setting default libvirt URI to qemu:///system
	I1003 18:26:38.876818  286439 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1003 18:26:38.876936  286439 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:26:38.934321  286439 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-03 18:26:38.925142022 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1003 18:26:38.934432  286439 docker.go:318] overlay module found
	I1003 18:26:38.937436  286439 out.go:99] Using the docker driver based on user configuration
	I1003 18:26:38.937475  286439 start.go:304] selected driver: docker
	I1003 18:26:38.937486  286439 start.go:924] validating driver "docker" against <nil>
	I1003 18:26:38.937603  286439 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:26:38.993336  286439 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-03 18:26:38.983345926 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1003 18:26:38.993494  286439 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1003 18:26:38.993793  286439 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1003 18:26:38.993956  286439 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1003 18:26:38.997071  286439 out.go:171] Using Docker driver with root privileges
	I1003 18:26:39.000153  286439 cni.go:84] Creating CNI manager for ""
	I1003 18:26:39.000241  286439 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 18:26:39.000264  286439 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1003 18:26:39.000355  286439 start.go:348] cluster config:
	{Name:download-only-487194 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-487194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:26:39.003455  286439 out.go:99] Starting "download-only-487194" primary control-plane node in "download-only-487194" cluster
	I1003 18:26:39.003495  286439 cache.go:123] Beginning downloading kic base image for docker with crio
	I1003 18:26:39.006517  286439 out.go:99] Pulling base image v0.0.48-1759382731-21643 ...
	I1003 18:26:39.006563  286439 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1003 18:26:39.006683  286439 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1003 18:26:39.022199  286439 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d to local cache
	I1003 18:26:39.022405  286439 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory
	I1003 18:26:39.022503  286439 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d to local cache
	I1003 18:26:39.065184  286439 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1003 18:26:39.065209  286439 cache.go:58] Caching tarball of preloaded images
	I1003 18:26:39.066062  286439 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1003 18:26:39.069292  286439 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1003 18:26:39.069316  286439 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1003 18:26:39.170698  286439 preload.go:290] Got checksum from GCS API "e092595ade89dbfc477bd4cd6b9c633b"
	I1003 18:26:39.170859  286439 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/21625-284583/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-487194 host does not exist
	  To start a cluster, run: "minikube start -p download-only-487194"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.13s)

TestDownloadOnly/v1.28.0/DeleteAll (0.32s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.32s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-487194
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.34.1/json-events (5.83s)

=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-217819 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-217819 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.833676889s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (5.83s)

TestDownloadOnly/v1.34.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1003 18:26:51.382240  286434 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1003 18:26:51.382275  286434 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21625-284583/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

TestDownloadOnly/v1.34.1/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-217819
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-217819: exit status 85 (93.41263ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-487194 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-487194 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ delete  │ -p download-only-487194                                                                                                                                                   │ download-only-487194 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │ 03 Oct 25 18:26 UTC │
	│ start   │ -o=json --download-only -p download-only-217819 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-217819 │ jenkins │ v1.37.0 │ 03 Oct 25 18:26 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/03 18:26:45
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1003 18:26:45.595305  286639 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:26:45.595743  286639 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:26:45.595788  286639 out.go:374] Setting ErrFile to fd 2...
	I1003 18:26:45.595807  286639 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:26:45.596155  286639 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-284583/.minikube/bin
	I1003 18:26:45.596641  286639 out.go:368] Setting JSON to true
	I1003 18:26:45.597541  286639 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":4157,"bootTime":1759511849,"procs":149,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1003 18:26:45.597644  286639 start.go:140] virtualization:  
	I1003 18:26:45.600933  286639 out.go:99] [download-only-217819] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1003 18:26:45.601133  286639 notify.go:220] Checking for updates...
	I1003 18:26:45.603995  286639 out.go:171] MINIKUBE_LOCATION=21625
	I1003 18:26:45.606919  286639 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 18:26:45.609731  286639 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21625-284583/kubeconfig
	I1003 18:26:45.612521  286639 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-284583/.minikube
	I1003 18:26:45.615505  286639 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1003 18:26:45.621139  286639 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1003 18:26:45.621400  286639 driver.go:421] Setting default libvirt URI to qemu:///system
	I1003 18:26:45.653365  286639 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1003 18:26:45.653485  286639 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:26:45.710162  286639 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:51 SystemTime:2025-10-03 18:26:45.701472567 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1003 18:26:45.710274  286639 docker.go:318] overlay module found
	I1003 18:26:45.713245  286639 out.go:99] Using the docker driver based on user configuration
	I1003 18:26:45.713282  286639 start.go:304] selected driver: docker
	I1003 18:26:45.713294  286639 start.go:924] validating driver "docker" against <nil>
	I1003 18:26:45.713394  286639 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:26:45.772563  286639 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:51 SystemTime:2025-10-03 18:26:45.763113196 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1003 18:26:45.772717  286639 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1003 18:26:45.773046  286639 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1003 18:26:45.773202  286639 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1003 18:26:45.776317  286639 out.go:171] Using Docker driver with root privileges
	I1003 18:26:45.779087  286639 cni.go:84] Creating CNI manager for ""
	I1003 18:26:45.779159  286639 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1003 18:26:45.779174  286639 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1003 18:26:45.779257  286639 start.go:348] cluster config:
	{Name:download-only-217819 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-217819 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:26:45.782292  286639 out.go:99] Starting "download-only-217819" primary control-plane node in "download-only-217819" cluster
	I1003 18:26:45.782316  286639 cache.go:123] Beginning downloading kic base image for docker with crio
	I1003 18:26:45.785149  286639 out.go:99] Pulling base image v0.0.48-1759382731-21643 ...
	I1003 18:26:45.785190  286639 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 18:26:45.785355  286639 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local docker daemon
	I1003 18:26:45.801143  286639 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d to local cache
	I1003 18:26:45.801278  286639 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory
	I1003 18:26:45.801303  286639 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d in local cache directory, skipping pull
	I1003 18:26:45.801309  286639 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d exists in cache, skipping pull
	I1003 18:26:45.801319  286639 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d as a tarball
	I1003 18:26:45.848485  286639 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1003 18:26:45.848512  286639 cache.go:58] Caching tarball of preloaded images
	I1003 18:26:45.849385  286639 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1003 18:26:45.852460  286639 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1003 18:26:45.852485  286639 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1003 18:26:45.944027  286639 preload.go:290] Got checksum from GCS API "bc3e4aa50814345ef9ba3452bb5efb9f"
	I1003 18:26:45.944098  286639 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4?checksum=md5:bc3e4aa50814345ef9ba3452bb5efb9f -> /home/jenkins/minikube-integration/21625-284583/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-217819 host does not exist
	  To start a cluster, run: "minikube start -p download-only-217819"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.09s)

TestDownloadOnly/v1.34.1/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.22s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-217819
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.61s)

=== RUN   TestBinaryMirror
I1003 18:26:52.533026  286434 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-482654 --alsologtostderr --binary-mirror http://127.0.0.1:38575 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-482654" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-482654
--- PASS: TestBinaryMirror (0.61s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-952140
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-952140: exit status 85 (67.620717ms)

                                                
                                                
-- stdout --
	* Profile "addons-952140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-952140"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-952140
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-952140: exit status 85 (78.183658ms)

                                                
                                                
-- stdout --
	* Profile "addons-952140" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-952140"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

TestAddons/Setup (171.27s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-952140 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-952140 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m51.262777636s)
--- PASS: TestAddons/Setup (171.27s)

TestAddons/serial/GCPAuth/Namespaces (0.23s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-952140 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-952140 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.23s)

TestAddons/serial/GCPAuth/FakeCredentials (8.92s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-952140 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-952140 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [16cbc4dc-bf5a-40a0-892a-b3483ba80b7d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [16cbc4dc-bf5a-40a0-892a-b3483ba80b7d] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.003609261s
addons_test.go:694: (dbg) Run:  kubectl --context addons-952140 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-952140 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-952140 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-952140 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.92s)

TestAddons/StoppedEnableDisable (12.28s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-952140
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-952140: (11.993123204s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-952140
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-952140
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-952140
--- PASS: TestAddons/StoppedEnableDisable (12.28s)

TestCertOptions (39.74s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-305866 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
E1003 19:34:45.417539  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-305866 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (36.760194216s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-305866 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-305866 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-305866 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-305866" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-305866
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-305866: (2.271103872s)
--- PASS: TestCertOptions (39.74s)

TestCertExpiration (232.88s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-324520 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-324520 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (31.749428194s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-324520 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-324520 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (18.709085297s)
helpers_test.go:175: Cleaning up "cert-expiration-324520" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-324520
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-324520: (2.419860663s)
--- PASS: TestCertExpiration (232.88s)

TestErrorSpam/setup (35.52s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-263733 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-263733 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-263733 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-263733 --driver=docker  --container-runtime=crio: (35.515155762s)
--- PASS: TestErrorSpam/setup (35.52s)

TestErrorSpam/start (0.81s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-263733 --log_dir /tmp/nospam-263733 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-263733 --log_dir /tmp/nospam-263733 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-263733 --log_dir /tmp/nospam-263733 start --dry-run
--- PASS: TestErrorSpam/start (0.81s)

TestErrorSpam/status (1.05s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-263733 --log_dir /tmp/nospam-263733 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-263733 --log_dir /tmp/nospam-263733 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-263733 --log_dir /tmp/nospam-263733 status
--- PASS: TestErrorSpam/status (1.05s)

TestErrorSpam/pause (5.9s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-263733 --log_dir /tmp/nospam-263733 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-263733 --log_dir /tmp/nospam-263733 pause: exit status 80 (1.876237732s)

                                                
                                                
-- stdout --
	* Pausing node nospam-263733 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-03T18:33:44Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-263733 --log_dir /tmp/nospam-263733 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-263733 --log_dir /tmp/nospam-263733 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-263733 --log_dir /tmp/nospam-263733 pause: exit status 80 (1.751564733s)

                                                
                                                
-- stdout --
	* Pausing node nospam-263733 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-03T18:33:46Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-263733 --log_dir /tmp/nospam-263733 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-263733 --log_dir /tmp/nospam-263733 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-263733 --log_dir /tmp/nospam-263733 pause: exit status 80 (2.274050823s)

                                                
                                                
-- stdout --
	* Pausing node nospam-263733 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-03T18:33:48Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-263733 --log_dir /tmp/nospam-263733 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (5.90s)

TestErrorSpam/unpause (5.43s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-263733 --log_dir /tmp/nospam-263733 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-263733 --log_dir /tmp/nospam-263733 unpause: exit status 80 (1.882190528s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-263733 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-03T18:33:50Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-263733 --log_dir /tmp/nospam-263733 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-263733 --log_dir /tmp/nospam-263733 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-263733 --log_dir /tmp/nospam-263733 unpause: exit status 80 (1.748744619s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-263733 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-03T18:33:52Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-263733 --log_dir /tmp/nospam-263733 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-263733 --log_dir /tmp/nospam-263733 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-263733 --log_dir /tmp/nospam-263733 unpause: exit status 80 (1.797231886s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-263733 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-03T18:33:54Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-263733 --log_dir /tmp/nospam-263733 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.43s)

TestErrorSpam/stop (1.43s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-263733 --log_dir /tmp/nospam-263733 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-263733 --log_dir /tmp/nospam-263733 stop: (1.223002025s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-263733 --log_dir /tmp/nospam-263733 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-263733 --log_dir /tmp/nospam-263733 stop
--- PASS: TestErrorSpam/stop (1.43s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21625-284583/.minikube/files/etc/test/nested/copy/286434/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (79.16s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-680560 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1003 18:34:45.421424  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:34:45.428582  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:34:45.440143  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:34:45.461769  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:34:45.503232  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:34:45.584852  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:34:45.746408  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:34:46.068259  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:34:46.710274  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:34:47.991648  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:34:50.553500  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:34:55.675005  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:35:05.916643  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-680560 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m19.158749153s)
--- PASS: TestFunctional/serial/StartWithProxy (79.16s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (42.14s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1003 18:35:19.753456  286434 config.go:182] Loaded profile config "functional-680560": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-680560 --alsologtostderr -v=8
E1003 18:35:26.398244  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-680560 --alsologtostderr -v=8: (42.13098967s)
functional_test.go:678: soft start took 42.135337053s for "functional-680560" cluster.
I1003 18:36:01.884794  286434 config.go:182] Loaded profile config "functional-680560": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (42.14s)

                                                
                                    
TestFunctional/serial/KubeContext (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-680560 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.52s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-680560 cache add registry.k8s.io/pause:3.1: (1.15275112s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-680560 cache add registry.k8s.io/pause:3.3: (1.269733307s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-680560 cache add registry.k8s.io/pause:latest: (1.102139817s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.52s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-680560 /tmp/TestFunctionalserialCacheCmdcacheadd_local2617708398/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 cache add minikube-local-cache-test:functional-680560
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 cache delete minikube-local-cache-test:functional-680560
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-680560
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.79s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 ssh sudo crictl rmi registry.k8s.io/pause:latest
E1003 18:36:07.359618  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-680560 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (297.617602ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.79s)
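
The cache_reload steps above double as a recipe for restoring an image that was removed from the node: delete it with crictl, confirm `crictl inspecti` fails, run `cache reload`, and confirm the inspect succeeds again. A minimal sketch of the same sequence outside the test harness, assuming the out/minikube-linux-arm64 binary and the functional-680560 profile from this run (runStep is an illustrative helper, not part of the suite):

	// cache_reload_sketch.go: replay the remove / fail / reload / succeed cycle above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	// runStep runs a command, echoes its combined output, and reports a zero exit.
	func runStep(name string, args ...string) bool {
		out, err := exec.Command(name, args...).CombinedOutput()
		fmt.Printf("$ %s %v\n%s", name, args, out)
		return err == nil
	}

	func main() {
		mk := "out/minikube-linux-arm64"
		profile := "functional-680560"
		img := "registry.k8s.io/pause:latest"

		// Remove the image from the node's runtime, then confirm inspecti fails.
		runStep(mk, "-p", profile, "ssh", "sudo crictl rmi "+img)
		if runStep(mk, "-p", profile, "ssh", "sudo crictl inspecti "+img) {
			fmt.Println("expected inspecti to fail before the reload")
		}
		// cache reload pushes the locally cached images back onto the node.
		runStep(mk, "-p", profile, "cache", "reload")
		if !runStep(mk, "-p", profile, "ssh", "sudo crictl inspecti "+img) {
			fmt.Println("expected inspecti to succeed after the reload")
		}
	}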

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 kubectl -- --context functional-680560 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-680560 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                    
TestFunctional/serial/ExtraConfig (33.5s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-680560 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-680560 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (33.497313203s)
functional_test.go:776: restart took 33.4974112s for "functional-680560" cluster.
I1003 18:36:42.729109  286434 config.go:182] Loaded profile config "functional-680560": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (33.50s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-680560 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)
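
ComponentHealth derives the phase/status lines above from `kubectl get po -l tier=control-plane -n kube-system -o=json`. A small sketch that decodes just the needed fields from that JSON, assuming the same context name; the `component` label used for naming is the standard label on static control-plane pods:

	// component_health_sketch.go: decode the control-plane pod list that the
	// phase/status lines above are derived from.
	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	type podList struct {
		Items []struct {
			Metadata struct {
				Labels map[string]string `json:"labels"`
			} `json:"metadata"`
			Status struct {
				Phase      string `json:"phase"`
				Conditions []struct {
					Type   string `json:"type"`
					Status string `json:"status"`
				} `json:"conditions"`
			} `json:"status"`
		} `json:"items"`
	}

	func main() {
		out, err := exec.Command("kubectl", "--context", "functional-680560",
			"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o", "json").Output()
		if err != nil {
			log.Fatal(err)
		}
		var pods podList
		if err := json.Unmarshal(out, &pods); err != nil {
			log.Fatal(err)
		}
		for _, p := range pods.Items {
			ready := "Unknown"
			for _, c := range p.Status.Conditions {
				if c.Type == "Ready" {
					ready = c.Status
				}
			}
			fmt.Printf("%s phase: %s, ready: %s\n", p.Metadata.Labels["component"], p.Status.Phase, ready)
		}
	}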

                                                
                                    
TestFunctional/serial/LogsCmd (1.44s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-680560 logs: (1.436185762s)
--- PASS: TestFunctional/serial/LogsCmd (1.44s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.65s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 logs --file /tmp/TestFunctionalserialLogsFileCmd2592706534/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-680560 logs --file /tmp/TestFunctionalserialLogsFileCmd2592706534/001/logs.txt: (1.650156722s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.65s)

                                                
                                    
TestFunctional/serial/InvalidService (4.11s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-680560 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-680560
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-680560: exit status 115 (382.482955ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31556 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-680560 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.11s)
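
The exit status 115 here corresponds to minikube's SVC_UNREACHABLE error, as the stderr block shows. A sketch of how a caller can surface that specific exit code in Go, assuming the same binary, profile, and the invalid-svc service from testdata (this is not the harness's own helper):

	// svc_unreachable_sketch.go: surface minikube's specific exit code (115 above).
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64", "service", "invalid-svc", "-p", "functional-680560")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))

		var exitErr *exec.ExitError
		switch {
		case err == nil:
			fmt.Println("unexpected success: invalid-svc has no running pod behind it")
		case errors.As(err, &exitErr):
			// 115 maps to SVC_UNREACHABLE in the run captured above.
			fmt.Println("minikube exit code:", exitErr.ExitCode())
		default:
			fmt.Println("could not run minikube:", err)
		}
	}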

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-680560 config get cpus: exit status 14 (89.822782ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-680560 config get cpus: exit status 14 (67.426182ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.47s)
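
Both non-zero exits above are exit status 14, which `minikube config get` returns when the key is unset. A sketch of a wrapper that treats 14 as "not set" rather than as a failure, assuming the same binary and profile (getConfig is an illustrative name):

	// config_get_sketch.go: treat exit status 14 from `config get` as "key not set".
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"strings"
	)

	// getConfig returns the value and whether the key is set for the given profile.
	func getConfig(profile, key string) (string, bool, error) {
		out, err := exec.Command("out/minikube-linux-arm64", "-p", profile, "config", "get", key).Output()
		if err == nil {
			return strings.TrimSpace(string(out)), true, nil
		}
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) && exitErr.ExitCode() == 14 {
			// "Error: specified key could not be found in config"
			return "", false, nil
		}
		return "", false, err
	}

	func main() {
		v, ok, err := getConfig("functional-680560", "cpus")
		switch {
		case err != nil:
			fmt.Println("config get failed:", err)
		case !ok:
			fmt.Println("cpus is not set")
		default:
			fmt.Println("cpus =", v)
		}
	}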

                                                
                                    
TestFunctional/parallel/DashboardCmd (9.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-680560 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-680560 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 312475: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.34s)

                                                
                                    
TestFunctional/parallel/DryRun (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-680560 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-680560 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (190.12864ms)

                                                
                                                
-- stdout --
	* [functional-680560] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21625
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21625-284583/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-284583/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 18:47:17.640447  312042 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:47:17.640635  312042 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:47:17.640668  312042 out.go:374] Setting ErrFile to fd 2...
	I1003 18:47:17.640700  312042 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:47:17.641113  312042 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-284583/.minikube/bin
	I1003 18:47:17.642549  312042 out.go:368] Setting JSON to false
	I1003 18:47:17.643485  312042 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":5389,"bootTime":1759511849,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1003 18:47:17.643604  312042 start.go:140] virtualization:  
	I1003 18:47:17.649396  312042 out.go:179] * [functional-680560] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1003 18:47:17.652895  312042 out.go:179]   - MINIKUBE_LOCATION=21625
	I1003 18:47:17.653001  312042 notify.go:220] Checking for updates...
	I1003 18:47:17.658988  312042 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 18:47:17.661938  312042 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21625-284583/kubeconfig
	I1003 18:47:17.664813  312042 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-284583/.minikube
	I1003 18:47:17.667613  312042 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1003 18:47:17.670525  312042 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 18:47:17.673911  312042 config.go:182] Loaded profile config "functional-680560": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:47:17.674621  312042 driver.go:421] Setting default libvirt URI to qemu:///system
	I1003 18:47:17.697690  312042 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1003 18:47:17.697839  312042 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:47:17.763460  312042 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-03 18:47:17.753315137 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1003 18:47:17.763565  312042 docker.go:318] overlay module found
	I1003 18:47:17.766682  312042 out.go:179] * Using the docker driver based on existing profile
	I1003 18:47:17.769656  312042 start.go:304] selected driver: docker
	I1003 18:47:17.769678  312042 start.go:924] validating driver "docker" against &{Name:functional-680560 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-680560 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:47:17.769789  312042 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 18:47:17.773303  312042 out.go:203] 
	W1003 18:47:17.776120  312042 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1003 18:47:17.778959  312042 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-680560 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.47s)
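
The dry run fails with exit status 23 because the requested 250MiB is below the 1800MB usable minimum reported in the stderr block. A sketch of a pre-flight check that enforces the same floor locally before handing validation to `--dry-run`, assuming the binary and profile from this run:

	// dry_run_memory_sketch.go: reject sub-minimum memory locally, otherwise let
	// `start --dry-run` validate the full configuration without creating anything.
	package main

	import (
		"fmt"
		"os/exec"
	)

	const minMemoryMB = 1800 // usable minimum reported in the stderr block above

	func main() {
		requestMB := 250 // the value that triggers exit status 23 above
		if requestMB < minMemoryMB {
			fmt.Printf("refusing locally: %dMB is below the %dMB minimum\n", requestMB, minMemoryMB)
			return
		}
		out, err := exec.Command("out/minikube-linux-arm64", "start", "-p", "functional-680560",
			"--dry-run", fmt.Sprintf("--memory=%dMB", requestMB),
			"--driver=docker", "--container-runtime=crio").CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Println("dry run rejected the configuration:", err)
		}
	}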

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-680560 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-680560 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (209.288829ms)

                                                
                                                
-- stdout --
	* [functional-680560] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21625
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21625-284583/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-284583/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 18:47:17.443132  311995 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:47:17.443325  311995 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:47:17.443336  311995 out.go:374] Setting ErrFile to fd 2...
	I1003 18:47:17.443343  311995 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:47:17.444710  311995 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-284583/.minikube/bin
	I1003 18:47:17.445225  311995 out.go:368] Setting JSON to false
	I1003 18:47:17.446109  311995 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":5389,"bootTime":1759511849,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1003 18:47:17.446182  311995 start.go:140] virtualization:  
	I1003 18:47:17.449848  311995 out.go:179] * [functional-680560] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1003 18:47:17.453645  311995 notify.go:220] Checking for updates...
	I1003 18:47:17.456692  311995 out.go:179]   - MINIKUBE_LOCATION=21625
	I1003 18:47:17.459581  311995 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 18:47:17.462512  311995 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21625-284583/kubeconfig
	I1003 18:47:17.465401  311995 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-284583/.minikube
	I1003 18:47:17.468347  311995 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1003 18:47:17.471211  311995 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 18:47:17.474674  311995 config.go:182] Loaded profile config "functional-680560": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:47:17.475359  311995 driver.go:421] Setting default libvirt URI to qemu:///system
	I1003 18:47:17.511506  311995 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1003 18:47:17.511654  311995 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:47:17.572408  311995 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-03 18:47:17.562663376 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1003 18:47:17.572519  311995 docker.go:318] overlay module found
	I1003 18:47:17.575632  311995 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1003 18:47:17.578583  311995 start.go:304] selected driver: docker
	I1003 18:47:17.578609  311995 start.go:924] validating driver "docker" against &{Name:functional-680560 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1759382731-21643@sha256:ca1b4db171879edd6bbb9546a4b1afac2eb5be94a0f5528496e62d2ff99de37d Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-680560 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1003 18:47:17.578718  311995 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 18:47:17.582320  311995 out.go:203] 
	W1003 18:47:17.585320  311995 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1003 18:47:17.588157  311995 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.21s)
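
The only difference from the DryRun output above is the language: minikube localizes its messages from the process locale, which is why the RSRC_INSUFFICIENT_REQ_MEMORY text appears here in French. A sketch that forces a French locale for the same dry run; which locale variable the binary honours (LC_ALL vs LANG) is an assumption:

	// locale_sketch.go: force a French locale for the same dry run; the assumption is
	// that the binary reads LC_ALL/LANG, which the localized output above suggests.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", "functional-680560",
			"--dry-run", "--memory", "250MB", "--driver=docker", "--container-runtime=crio")
		cmd.Env = append(os.Environ(), "LC_ALL=fr_FR.UTF-8", "LANG=fr_FR.UTF-8")
		out, _ := cmd.CombinedOutput()
		fmt.Print(string(out)) // expect "Utilisation du pilote docker ..." as in the log
	}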

                                                
                                    
TestFunctional/parallel/StatusCmd (1.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.04s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.21s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (25.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [63ff9706-92fd-45b1-8a79-0b55a924642a] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003521572s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-680560 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-680560 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-680560 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-680560 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [021cdde9-c70c-43c0-9606-4b8d60ac3d0f] Pending
helpers_test.go:352: "sp-pod" [021cdde9-c70c-43c0-9606-4b8d60ac3d0f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [021cdde9-c70c-43c0-9606-4b8d60ac3d0f] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.011697723s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-680560 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-680560 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-680560 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [227d2e09-ac30-4536-ab0e-f33827f1b956] Pending
helpers_test.go:352: "sp-pod" [227d2e09-ac30-4536-ab0e-f33827f1b956] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.002848278s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-680560 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.61s)
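
The test writes a file into the PVC-backed mount, deletes the pod, recreates it, and then lists the mount to show the data survived. A sketch of the same round trip using `kubectl wait` instead of the harness's own polling, assuming the context and testdata manifests referenced above:

	// pvc_persistence_sketch.go: data written under the PVC-backed mount must survive
	// deleting and recreating the pod.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func kubectl(args ...string) {
		full := append([]string{"--context", "functional-680560"}, args...)
		out, err := exec.Command("kubectl", full...).CombinedOutput()
		fmt.Printf("$ kubectl %v\n%s", args, out)
		if err != nil {
			fmt.Println("  (command failed:", err, ")")
		}
	}

	func main() {
		kubectl("apply", "-f", "testdata/storage-provisioner/pvc.yaml")
		kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
		kubectl("wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=4m")
		kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")

		// Recreate the pod; the claim, and the data on it, stays behind.
		kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
		kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
		kubectl("wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=4m")
		kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount") // should list "foo"
	}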

                                                
                                    
TestFunctional/parallel/SSHCmd (0.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.76s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 ssh -n functional-680560 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 cp functional-680560:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2350196802/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 ssh -n functional-680560 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 ssh -n functional-680560 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.45s)

                                                
                                    
TestFunctional/parallel/FileSync (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/286434/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 ssh "sudo cat /etc/test/nested/copy/286434/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.37s)

                                                
                                    
TestFunctional/parallel/CertSync (2.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/286434.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 ssh "sudo cat /etc/ssl/certs/286434.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/286434.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 ssh "sudo cat /usr/share/ca-certificates/286434.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/2864342.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 ssh "sudo cat /etc/ssl/certs/2864342.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/2864342.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 ssh "sudo cat /usr/share/ca-certificates/2864342.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.59s)
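
CertSync reads each certificate back from three locations inside the node: the copies under /etc/ssl/certs and /usr/share/ca-certificates, plus the hash-named entry (51391683.0, 3ec20f2e.0). A sketch that cats those paths over `minikube ssh` and checks they hold the same PEM, assuming the paths from this run; the equality check is what the per-certificate triple of reads above implies:

	// cert_sync_sketch.go: read the three in-node locations checked above and verify
	// they hold the same PEM.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func nodeCat(profile, path string) string {
		out, err := exec.Command("out/minikube-linux-arm64", "-p", profile,
			"ssh", "sudo cat "+path).Output()
		if err != nil {
			return "ERROR: " + err.Error()
		}
		return string(out)
	}

	func main() {
		profile := "functional-680560"
		paths := []string{
			"/etc/ssl/certs/286434.pem",
			"/usr/share/ca-certificates/286434.pem",
			"/etc/ssl/certs/51391683.0",
		}
		want := nodeCat(profile, paths[0])
		for _, p := range paths[1:] {
			if nodeCat(profile, p) == want {
				fmt.Println(p, "matches", paths[0])
			} else {
				fmt.Println(p, "differs from", paths[0])
			}
		}
	}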

                                                
                                    
TestFunctional/parallel/NodeLabels (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-680560 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.11s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-680560 ssh "sudo systemctl is-active docker": exit status 1 (449.170156ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 ssh "sudo systemctl is-active containerd"
2025/10/03 18:47:27 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-680560 ssh "sudo systemctl is-active containerd": exit status 1 (416.867652ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.87s)
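
With crio as the configured runtime, `systemctl is-active docker` and `systemctl is-active containerd` both print "inactive" and exit non-zero (the remote systemctl exits 3, and `minikube ssh` propagates the failure). A sketch that reports the state of all three runtime units, assuming the same binary and profile; it reads the state from stdout and ignores the exit code:

	// runtime_units_sketch.go: report the systemd state of each container runtime in
	// the node; only crio should be active on this cluster.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func unitState(profile, unit string) string {
		// `systemctl is-active` exits non-zero for inactive units, so the error is
		// ignored here and the state is read from stdout instead.
		out, _ := exec.Command("out/minikube-linux-arm64", "-p", profile,
			"ssh", "sudo systemctl is-active "+unit).Output()
		return strings.TrimSpace(string(out))
	}

	func main() {
		for _, unit := range []string{"crio", "docker", "containerd"} {
			fmt.Printf("%-10s %s\n", unit, unitState("functional-680560", unit))
		}
	}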

                                                
                                    
TestFunctional/parallel/License (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.35s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-680560 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-680560 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-680560 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 308594: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-680560 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.70s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-680560 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-680560 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [41187308-79c3-461a-8ba4-ba435d0ccccc] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [41187308-79c3-461a-8ba4-ba435d0ccccc] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.004396818s
I1003 18:37:00.382499  286434 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.43s)
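
The 4m wait above polls until a pod matching run=nginx-svc is Running and its containers are Ready. A sketch of the phase-only part of that wait using a kubectl jsonpath query, assuming the same context; unlike the harness it does not also check container readiness:

	// wait_for_label_sketch.go: poll until a pod matching the selector reports the
	// Running phase, or give up after four minutes.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func phases(context, selector string) []string {
		out, err := exec.Command("kubectl", "--context", context, "get", "pods",
			"-l", selector, "-o", "jsonpath={.items[*].status.phase}").Output()
		if err != nil {
			return nil
		}
		return strings.Fields(string(out))
	}

	func main() {
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			ps := phases("functional-680560", "run=nginx-svc")
			if len(ps) > 0 && ps[0] == "Running" {
				fmt.Println("nginx-svc pod is Running")
				return
			}
			fmt.Println("still waiting, phases:", ps)
			time.Sleep(5 * time.Second)
		}
		fmt.Println("timed out waiting for run=nginx-svc")
	}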

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-680560 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.13s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.106.72.152 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
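Taken together, the tunnel checks above reduce to: apply testdata/testsvc.yaml, wait for nginx-svc to receive a LoadBalancer ingress IP while minikube tunnel is running, then issue a plain HTTP GET against that IP (10.106.72.152 in this run). A rough Go sketch of the same poll-then-probe loop, assuming the kubectl context name from this run; the 4-minute budget mirrors the test's wait, but the helper itself is hypothetical:

package main

import (
	"fmt"
	"net/http"
	"os/exec"
	"strings"
	"time"
)

func main() {
	// Poll until the tunnel has populated .status.loadBalancer.ingress[0].ip.
	var ip string
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", "functional-680560",
			"get", "svc", "nginx-svc", "-o",
			"jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
		if err == nil && strings.TrimSpace(string(out)) != "" {
			ip = strings.TrimSpace(string(out))
			break
		}
		time.Sleep(3 * time.Second)
	}
	if ip == "" {
		fmt.Println("no ingress IP assigned; is minikube tunnel running?")
		return
	}
	// With the tunnel up, the LoadBalancer IP is reachable from the host.
	resp, err := http.Get("http://" + ip)
	if err != nil {
		fmt.Println("GET failed:", err)
		return
	}
	resp.Body.Close()
	fmt.Println("tunnel reachable, status:", resp.Status)
}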

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-680560 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "366.532703ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "60.438494ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "375.798603ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "53.873687ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)
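The timings above suggest that --light skips the per-profile status probing (about 54 ms versus about 376 ms for the full listing here). A small Go sketch of consuming that output follows; it decodes into a generic map because the exact JSON schema is not shown in this report, and the binary path is the one used throughout this run:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	// --light returns quickly because it does not query each cluster's live status.
	out, err := exec.Command("out/minikube-linux-arm64", "profile", "list", "-o", "json", "--light").Output()
	if err != nil {
		fmt.Println("profile list failed:", err)
		return
	}
	// Decode generically rather than assuming a schema this report does not show.
	var profiles map[string]interface{}
	if err := json.Unmarshal(out, &profiles); err != nil {
		fmt.Println("unexpected output:", err)
		return
	}
	for key, val := range profiles {
		fmt.Printf("%s: %v\n", key, val)
	}
}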

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (7.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-680560 /tmp/TestFunctionalparallelMountCmdany-port3843511273/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1759517225613903787" to /tmp/TestFunctionalparallelMountCmdany-port3843511273/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1759517225613903787" to /tmp/TestFunctionalparallelMountCmdany-port3843511273/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1759517225613903787" to /tmp/TestFunctionalparallelMountCmdany-port3843511273/001/test-1759517225613903787
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-680560 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (333.736586ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1003 18:47:05.947881  286434 retry.go:31] will retry after 414.335904ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  3 18:47 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  3 18:47 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  3 18:47 test-1759517225613903787
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 ssh cat /mount-9p/test-1759517225613903787
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-680560 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [30e3cfa9-8700-4ca8-8c32-bbc77fe2037b] Pending
helpers_test.go:352: "busybox-mount" [30e3cfa9-8700-4ca8-8c32-bbc77fe2037b] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [30e3cfa9-8700-4ca8-8c32-bbc77fe2037b] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [30e3cfa9-8700-4ca8-8c32-bbc77fe2037b] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003528009s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-680560 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-680560 /tmp/TestFunctionalparallelMountCmdany-port3843511273/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.73s)
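The any-port flow above is: start minikube mount as a background daemon, retry "findmnt -T /mount-9p | grep 9p" over SSH until the 9p mount appears (the first probe failed and was retried after roughly 400 ms), exercise the files from both host and pod, then stop the daemon. A hedged Go equivalent using a throwaway host directory; the profile name and binary path are taken from this run, everything else is illustrative:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	dir, _ := os.MkdirTemp("", "mount-demo")
	defer os.RemoveAll(dir)

	// Background daemon: host dir -> /mount-9p inside the node over 9p.
	mount := exec.Command("out/minikube-linux-arm64", "mount", "-p", "functional-680560", dir+":/mount-9p")
	if err := mount.Start(); err != nil {
		fmt.Println("mount failed to start:", err)
		return
	}
	defer func() {
		_ = mount.Process.Kill() // mirrors the test's "stopping [...]" teardown
		_ = mount.Wait()
	}()

	// The mount comes up asynchronously, so retry findmnt until it is visible.
	for i := 0; i < 10; i++ {
		err := exec.Command("out/minikube-linux-arm64", "-p", "functional-680560",
			"ssh", "findmnt -T /mount-9p | grep 9p").Run()
		if err == nil {
			fmt.Println("9p mount is visible in the guest")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("mount never appeared")
}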

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-680560 /tmp/TestFunctionalparallelMountCmdspecific-port3442052680/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-680560 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (328.771391ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1003 18:47:13.672898  286434 retry.go:31] will retry after 358.963769ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-680560 /tmp/TestFunctionalparallelMountCmdspecific-port3442052680/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-680560 ssh "sudo umount -f /mount-9p": exit status 1 (262.68231ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-680560 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-680560 /tmp/TestFunctionalparallelMountCmdspecific-port3442052680/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.70s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-680560 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2127995241/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-680560 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2127995241/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-680560 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2127995241/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-680560 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-680560 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2127995241/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-680560 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2127995241/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-680560 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2127995241/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.30s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.64s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (1.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-arm64 -p functional-680560 service list -o json: (1.417348002s)
functional_test.go:1504: Took "1.417445497s" to run "out/minikube-linux-arm64 -p functional-680560 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.42s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (1.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-680560 version -o=json --components: (1.31415885s)
--- PASS: TestFunctional/parallel/Version/components (1.31s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-680560 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-680560 image ls --format short --alsologtostderr:
I1003 18:47:33.354009  314747 out.go:360] Setting OutFile to fd 1 ...
I1003 18:47:33.354245  314747 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1003 18:47:33.354257  314747 out.go:374] Setting ErrFile to fd 2...
I1003 18:47:33.354262  314747 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1003 18:47:33.354585  314747 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-284583/.minikube/bin
I1003 18:47:33.355298  314747 config.go:182] Loaded profile config "functional-680560": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1003 18:47:33.355424  314747 config.go:182] Loaded profile config "functional-680560": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1003 18:47:33.356080  314747 cli_runner.go:164] Run: docker container inspect functional-680560 --format={{.State.Status}}
I1003 18:47:33.375873  314747 ssh_runner.go:195] Run: systemctl --version
I1003 18:47:33.375925  314747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-680560
I1003 18:47:33.400197  314747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/functional-680560/id_rsa Username:docker}
I1003 18:47:33.500353  314747 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-680560 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 1611cd07b61d5 │ 3.77MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ a1894772a478e │ 206MB  │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ 43911e833d64d │ 84.8MB │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ 05baa95f5142d │ 75.9MB │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ docker.io/library/nginx                 │ latest             │ 0777d15d89ece │ 202MB  │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 138784d87c9c5 │ 73.2MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ 7eb2c6ff0c5a7 │ 72.6MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ b5f57ec6b9867 │ 51.6MB │
│ docker.io/library/nginx                 │ alpine             │ 35f3cbee4fb77 │ 54.3MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-680560 image ls --format table --alsologtostderr:
I1003 18:47:33.629102  314828 out.go:360] Setting OutFile to fd 1 ...
I1003 18:47:33.629256  314828 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1003 18:47:33.629391  314828 out.go:374] Setting ErrFile to fd 2...
I1003 18:47:33.629408  314828 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1003 18:47:33.631716  314828 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-284583/.minikube/bin
I1003 18:47:33.632523  314828 config.go:182] Loaded profile config "functional-680560": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1003 18:47:33.632686  314828 config.go:182] Loaded profile config "functional-680560": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1003 18:47:33.633261  314828 cli_runner.go:164] Run: docker container inspect functional-680560 --format={{.State.Status}}
I1003 18:47:33.663223  314828 ssh_runner.go:195] Run: systemctl --version
I1003 18:47:33.663280  314828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-680560
I1003 18:47:33.694409  314828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/functional-680560/id_rsa Username:docker}
I1003 18:47:33.792257  314828 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-680560 image ls --format json --alsologtostderr:
[{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"35f3cbee4fb77c3efb39f2723a21ce181906139442a37de8ffc52d89641d9936","repoDigests":["docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8","docker.io/library/nginx@sha256:77d740efa8f9c4753f2a7212d8422b8c77411681971f400ea03d07fe38476cac"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54348302"},{"id":"0777d15d89ecedd8739877d62d8983e9f4b081efa23140db06299b0abe7a985b","repoDigests":["docker.io/library/nginx@sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc","docker.io/library/nginx@sha256:e041cf856a0f3790b5ef37a966f43d872fba48fcf4405fd3e8a28ac5f7436992"],"repoTags":["docker.io/library/n
ginx:latest"],"size":"202036629"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902","registry.k8s.io/kube-apiserver
@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"84753391"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],
"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"73195387"},{"id":"a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","repoDigests":["registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"205987068"},{"id":"7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f","registry.k8s.io/kube-controll
er-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"72629077"},{"id":"05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9","repoDigests":["registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6","registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"75938711"},{"id":"b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500","registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"51592017"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9d
bcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-680560 image ls --format json --alsologtostderr:
I1003 18:47:33.606653  314823 out.go:360] Setting OutFile to fd 1 ...
I1003 18:47:33.606881  314823 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1003 18:47:33.606912  314823 out.go:374] Setting ErrFile to fd 2...
I1003 18:47:33.606935  314823 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1003 18:47:33.607229  314823 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-284583/.minikube/bin
I1003 18:47:33.607903  314823 config.go:182] Loaded profile config "functional-680560": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1003 18:47:33.608074  314823 config.go:182] Loaded profile config "functional-680560": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1003 18:47:33.608602  314823 cli_runner.go:164] Run: docker container inspect functional-680560 --format={{.State.Status}}
I1003 18:47:33.650731  314823 ssh_runner.go:195] Run: systemctl --version
I1003 18:47:33.650793  314823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-680560
I1003 18:47:33.679439  314823 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/functional-680560/id_rsa Username:docker}
I1003 18:47:33.783082  314823 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)
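The JSON listing is the machine-readable twin of the short, table, and yaml formats: each entry carries id, repoDigests, repoTags, and size (bytes, emitted as a string). A short Go sketch that decodes it, with the struct fields taken directly from the output above and the same profile assumed:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// image mirrors the fields visible in the "image ls --format json" output above.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, as a string
}

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-680560",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		fmt.Println("image ls failed:", err)
		return
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, img := range images {
		tag := "<none>"
		if len(img.RepoTags) > 0 {
			tag = img.RepoTags[0]
		}
		fmt.Printf("%-45s %s bytes\n", tag, img.Size)
	}
}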

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-680560 image ls --format yaml --alsologtostderr:
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests:
- registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "205987068"
- id: 05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9
repoDigests:
- registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "75938711"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 0777d15d89ecedd8739877d62d8983e9f4b081efa23140db06299b0abe7a985b
repoDigests:
- docker.io/library/nginx@sha256:8adbdcb969e2676478ee2c7ad333956f0c8e0e4c5a7463f4611d7a2e7a7ff5dc
- docker.io/library/nginx@sha256:e041cf856a0f3790b5ef37a966f43d872fba48fcf4405fd3e8a28ac5f7436992
repoTags:
- docker.io/library/nginx:latest
size: "202036629"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 35f3cbee4fb77c3efb39f2723a21ce181906139442a37de8ffc52d89641d9936
repoDigests:
- docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8
- docker.io/library/nginx@sha256:77d740efa8f9c4753f2a7212d8422b8c77411681971f400ea03d07fe38476cac
repoTags:
- docker.io/library/nginx:alpine
size: "54348302"
- id: 7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "72629077"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "73195387"
- id: 43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
- registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "84753391"
- id: b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
- registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "51592017"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-680560 image ls --format yaml --alsologtostderr:
I1003 18:47:33.349895  314748 out.go:360] Setting OutFile to fd 1 ...
I1003 18:47:33.350324  314748 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1003 18:47:33.350333  314748 out.go:374] Setting ErrFile to fd 2...
I1003 18:47:33.350339  314748 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1003 18:47:33.350624  314748 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-284583/.minikube/bin
I1003 18:47:33.351262  314748 config.go:182] Loaded profile config "functional-680560": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1003 18:47:33.351408  314748 config.go:182] Loaded profile config "functional-680560": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1003 18:47:33.351855  314748 cli_runner.go:164] Run: docker container inspect functional-680560 --format={{.State.Status}}
I1003 18:47:33.375938  314748 ssh_runner.go:195] Run: systemctl --version
I1003 18:47:33.375975  314748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-680560
I1003 18:47:33.397385  314748 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/functional-680560/id_rsa Username:docker}
I1003 18:47:33.495578  314748 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (3.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-680560 ssh pgrep buildkitd: exit status 1 (287.053038ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 image build -t localhost/my-image:functional-680560 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-680560 image build -t localhost/my-image:functional-680560 testdata/build --alsologtostderr: (3.357100038s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-680560 image build -t localhost/my-image:functional-680560 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 354d0a5739a
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-680560
--> b0a11bce4bd
Successfully tagged localhost/my-image:functional-680560
b0a11bce4bdcb00537c680177c7de2f05e607120b70b298cf2c06a00d794492d
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-680560 image build -t localhost/my-image:functional-680560 testdata/build --alsologtostderr:
I1003 18:47:34.166717  314959 out.go:360] Setting OutFile to fd 1 ...
I1003 18:47:34.167607  314959 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1003 18:47:34.167623  314959 out.go:374] Setting ErrFile to fd 2...
I1003 18:47:34.167628  314959 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1003 18:47:34.167923  314959 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-284583/.minikube/bin
I1003 18:47:34.168581  314959 config.go:182] Loaded profile config "functional-680560": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1003 18:47:34.170061  314959 config.go:182] Loaded profile config "functional-680560": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1003 18:47:34.170548  314959 cli_runner.go:164] Run: docker container inspect functional-680560 --format={{.State.Status}}
I1003 18:47:34.188268  314959 ssh_runner.go:195] Run: systemctl --version
I1003 18:47:34.188325  314959 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-680560
I1003 18:47:34.205595  314959 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/functional-680560/id_rsa Username:docker}
I1003 18:47:34.303154  314959 build_images.go:161] Building image from path: /tmp/build.720373505.tar
I1003 18:47:34.303221  314959 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1003 18:47:34.311052  314959 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.720373505.tar
I1003 18:47:34.314565  314959 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.720373505.tar: stat -c "%s %y" /var/lib/minikube/build/build.720373505.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.720373505.tar': No such file or directory
I1003 18:47:34.314593  314959 ssh_runner.go:362] scp /tmp/build.720373505.tar --> /var/lib/minikube/build/build.720373505.tar (3072 bytes)
I1003 18:47:34.332678  314959 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.720373505
I1003 18:47:34.340425  314959 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.720373505 -xf /var/lib/minikube/build/build.720373505.tar
I1003 18:47:34.348679  314959 crio.go:315] Building image: /var/lib/minikube/build/build.720373505
I1003 18:47:34.348887  314959 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-680560 /var/lib/minikube/build/build.720373505 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1003 18:47:37.442114  314959 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-680560 /var/lib/minikube/build/build.720373505 --cgroup-manager=cgroupfs: (3.093194916s)
I1003 18:47:37.442189  314959 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.720373505
I1003 18:47:37.450331  314959 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.720373505.tar
I1003 18:47:37.458066  314959 build_images.go:217] Built localhost/my-image:functional-680560 from /tmp/build.720373505.tar
I1003 18:47:37.458099  314959 build_images.go:133] succeeded building to: functional-680560
I1003 18:47:37.458105  314959 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.87s)
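The STEP lines above imply a three-step context (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /) that minikube tars, copies into the node, and builds there with podman under the crio runtime. A hedged Go sketch that assembles a similar context and hands it to image build; the Dockerfile text, file contents, and demo tag are placeholders, not the repository's actual testdata/build:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	// Assemble a minimal build context shaped like the one implied by the STEP lines.
	ctx, _ := os.MkdirTemp("", "build-demo")
	defer os.RemoveAll(ctx)
	dockerfile := "FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n"
	_ = os.WriteFile(filepath.Join(ctx, "Dockerfile"), []byte(dockerfile), 0o644)
	_ = os.WriteFile(filepath.Join(ctx, "content.txt"), []byte("hello\n"), 0o644)

	// minikube copies the context into the node and builds it there (podman on crio).
	cmd := exec.Command("out/minikube-linux-arm64", "-p", "functional-680560",
		"image", "build", "-t", "localhost/my-image:demo", ctx)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Println("build failed:", err)
	}
}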

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (0.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-680560
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.77s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 image rm kicbase/echo-server:functional-680560 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-680560 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)
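ImageRemove pairs image rm with a follow-up image ls, so the check is against the listed state of the node rather than the rm exit code alone. A small hedged sketch of that remove-then-verify pattern for the same tag:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const ref = "kicbase/echo-server:functional-680560"

	// Remove the image from the node's container storage...
	if err := exec.Command("out/minikube-linux-arm64", "-p", "functional-680560",
		"image", "rm", ref).Run(); err != nil {
		fmt.Println("image rm failed:", err)
		return
	}
	// ...then assert on what image ls actually reports afterwards.
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-680560", "image", "ls").Output()
	if err != nil {
		fmt.Println("image ls failed:", err)
		return
	}
	if strings.Contains(string(out), ref) {
		fmt.Println("image is still listed after rm")
	} else {
		fmt.Println("image removed")
	}
}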

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-680560
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-680560
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-680560
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (203.94s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1003 18:49:45.417151  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-717680 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (3m23.07342446s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (203.94s)
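The cluster here comes up from a single start --ha invocation (multiple control-plane nodes in about 3 minutes 23 seconds on this runner), followed by a status call that reports every node in the profile. A hedged Go sketch driving the same pair of commands, with the flags copied from this run:

package main

import (
	"os"
	"os/exec"
)

func run(args ...string) error {
	cmd := exec.Command("out/minikube-linux-arm64", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	// --ha provisions multiple control-plane nodes; --wait true blocks until components are healthy.
	if err := run("-p", "ha-717680", "start", "--ha", "--memory", "3072",
		"--wait", "true", "--driver=docker", "--container-runtime=crio"); err != nil {
		os.Exit(1)
	}
	// status then reports each node (control planes and, later, workers) for the profile.
	_ = run("-p", "ha-717680", "status")
}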

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (6.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 kubectl -- rollout status deployment/busybox
E1003 18:51:08.484617  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-717680 kubectl -- rollout status deployment/busybox: (3.964652322s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 kubectl -- exec busybox-7b57f96db7-df69f -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 kubectl -- exec busybox-7b57f96db7-ssl7f -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 kubectl -- exec busybox-7b57f96db7-zl6bc -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 kubectl -- exec busybox-7b57f96db7-df69f -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 kubectl -- exec busybox-7b57f96db7-ssl7f -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 kubectl -- exec busybox-7b57f96db7-zl6bc -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 kubectl -- exec busybox-7b57f96db7-df69f -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 kubectl -- exec busybox-7b57f96db7-ssl7f -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 kubectl -- exec busybox-7b57f96db7-zl6bc -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.83s)
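DeployApp applies the busybox DNS manifest, waits for the rollout, and then runs nslookup for kubernetes.io, kubernetes.default, and kubernetes.default.svc.cluster.local inside every replica, so each pod exercises cluster DNS from its node. A hedged Go sketch of that per-pod loop; the pod-name query mirrors the jsonpath used above, and filtering on the "busybox-" prefix is an assumption of this sketch:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// List pod names the same way the test does.
	out, err := exec.Command("kubectl", "--context", "ha-717680", "get", "pods",
		"-o", "jsonpath={.items[*].metadata.name}").Output()
	if err != nil {
		fmt.Println("listing pods failed:", err)
		return
	}
	hosts := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
	for _, pod := range strings.Fields(string(out)) {
		if !strings.HasPrefix(pod, "busybox-") {
			continue // only the DNS-test replicas
		}
		for _, host := range hosts {
			// Every replica must resolve both external and in-cluster names.
			if err := exec.Command("kubectl", "--context", "ha-717680",
				"exec", pod, "--", "nslookup", host).Run(); err != nil {
				fmt.Printf("%s: lookup of %s failed: %v\n", pod, host, err)
			}
		}
	}
}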

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.48s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 kubectl -- exec busybox-7b57f96db7-df69f -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 kubectl -- exec busybox-7b57f96db7-df69f -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 kubectl -- exec busybox-7b57f96db7-ssl7f -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 kubectl -- exec busybox-7b57f96db7-ssl7f -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 kubectl -- exec busybox-7b57f96db7-zl6bc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 kubectl -- exec busybox-7b57f96db7-zl6bc -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.48s)
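PingHostFromPods resolves host.minikube.internal inside each pod, extracts the address with awk 'NR==5' | cut -d' ' -f3 (the fifth line of busybox's nslookup output carries the answer), and then pings that address from the pod; in this run it resolves to the docker network gateway 192.168.49.1. A hedged single-pod sketch of the same pipeline, reusing one replica name from the log above:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const pod = "busybox-7b57f96db7-df69f" // one replica from the run above

	// Resolve host.minikube.internal inside the pod and pluck the answer line,
	// using the same shell pipeline the test runs.
	script := "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	out, err := exec.Command("kubectl", "--context", "ha-717680",
		"exec", pod, "--", "sh", "-c", script).Output()
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	host := strings.TrimSpace(string(out))

	// Ping the resolved host address from inside the pod.
	if err := exec.Command("kubectl", "--context", "ha-717680",
		"exec", pod, "--", "sh", "-c", "ping -c 1 "+host).Run(); err != nil {
		fmt.Println("ping failed:", err)
		return
	}
	fmt.Println("host reachable from pod:", host)
}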

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (60.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 node add --alsologtostderr -v 5
E1003 18:51:51.958666  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/functional-680560/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:51:51.965548  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/functional-680560/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:51:51.976978  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/functional-680560/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:51:51.998369  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/functional-680560/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:51:52.039924  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/functional-680560/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:51:52.121377  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/functional-680560/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:51:52.282964  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/functional-680560/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:51:52.604781  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/functional-680560/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:51:53.246773  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/functional-680560/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:51:54.528588  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/functional-680560/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:51:57.089992  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/functional-680560/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:52:02.211786  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/functional-680560/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-717680 node add --alsologtostderr -v 5: (59.411943276s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 status --alsologtostderr -v 5
E1003 18:52:12.453460  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/functional-680560/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-717680 status --alsologtostderr -v 5: (1.006694509s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (60.42s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-717680 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.05s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.047272995s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.05s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (19.32s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-717680 status --output json --alsologtostderr -v 5: (1.0113264s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 cp testdata/cp-test.txt ha-717680:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 ssh -n ha-717680 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 cp ha-717680:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1930754841/001/cp-test_ha-717680.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 ssh -n ha-717680 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 cp ha-717680:/home/docker/cp-test.txt ha-717680-m02:/home/docker/cp-test_ha-717680_ha-717680-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 ssh -n ha-717680 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 ssh -n ha-717680-m02 "sudo cat /home/docker/cp-test_ha-717680_ha-717680-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 cp ha-717680:/home/docker/cp-test.txt ha-717680-m03:/home/docker/cp-test_ha-717680_ha-717680-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 ssh -n ha-717680 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 ssh -n ha-717680-m03 "sudo cat /home/docker/cp-test_ha-717680_ha-717680-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 cp ha-717680:/home/docker/cp-test.txt ha-717680-m04:/home/docker/cp-test_ha-717680_ha-717680-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 ssh -n ha-717680 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 ssh -n ha-717680-m04 "sudo cat /home/docker/cp-test_ha-717680_ha-717680-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 cp testdata/cp-test.txt ha-717680-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 ssh -n ha-717680-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 cp ha-717680-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1930754841/001/cp-test_ha-717680-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 ssh -n ha-717680-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 cp ha-717680-m02:/home/docker/cp-test.txt ha-717680:/home/docker/cp-test_ha-717680-m02_ha-717680.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 ssh -n ha-717680-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 ssh -n ha-717680 "sudo cat /home/docker/cp-test_ha-717680-m02_ha-717680.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 cp ha-717680-m02:/home/docker/cp-test.txt ha-717680-m03:/home/docker/cp-test_ha-717680-m02_ha-717680-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 ssh -n ha-717680-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 ssh -n ha-717680-m03 "sudo cat /home/docker/cp-test_ha-717680-m02_ha-717680-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 cp ha-717680-m02:/home/docker/cp-test.txt ha-717680-m04:/home/docker/cp-test_ha-717680-m02_ha-717680-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 ssh -n ha-717680-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 ssh -n ha-717680-m04 "sudo cat /home/docker/cp-test_ha-717680-m02_ha-717680-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 cp testdata/cp-test.txt ha-717680-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 ssh -n ha-717680-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 cp ha-717680-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1930754841/001/cp-test_ha-717680-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 ssh -n ha-717680-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 cp ha-717680-m03:/home/docker/cp-test.txt ha-717680:/home/docker/cp-test_ha-717680-m03_ha-717680.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 ssh -n ha-717680-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 ssh -n ha-717680 "sudo cat /home/docker/cp-test_ha-717680-m03_ha-717680.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 cp ha-717680-m03:/home/docker/cp-test.txt ha-717680-m02:/home/docker/cp-test_ha-717680-m03_ha-717680-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 ssh -n ha-717680-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 ssh -n ha-717680-m02 "sudo cat /home/docker/cp-test_ha-717680-m03_ha-717680-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 cp ha-717680-m03:/home/docker/cp-test.txt ha-717680-m04:/home/docker/cp-test_ha-717680-m03_ha-717680-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 ssh -n ha-717680-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 ssh -n ha-717680-m04 "sudo cat /home/docker/cp-test_ha-717680-m03_ha-717680-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 cp testdata/cp-test.txt ha-717680-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 ssh -n ha-717680-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 cp ha-717680-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1930754841/001/cp-test_ha-717680-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 ssh -n ha-717680-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 cp ha-717680-m04:/home/docker/cp-test.txt ha-717680:/home/docker/cp-test_ha-717680-m04_ha-717680.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 ssh -n ha-717680-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 ssh -n ha-717680 "sudo cat /home/docker/cp-test_ha-717680-m04_ha-717680.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 cp ha-717680-m04:/home/docker/cp-test.txt ha-717680-m02:/home/docker/cp-test_ha-717680-m04_ha-717680-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 ssh -n ha-717680-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 ssh -n ha-717680-m02 "sudo cat /home/docker/cp-test_ha-717680-m04_ha-717680-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 cp ha-717680-m04:/home/docker/cp-test.txt ha-717680-m03:/home/docker/cp-test_ha-717680-m04_ha-717680-m03.txt
E1003 18:52:32.938701  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/functional-680560/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 ssh -n ha-717680-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 ssh -n ha-717680-m03 "sudo cat /home/docker/cp-test_ha-717680-m04_ha-717680-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.32s)
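
Each cp/ssh pair above is the same round-trip: push testdata/cp-test.txt onto a node with minikube cp, then cat it back over minikube ssh and compare. A rough Go sketch of that loop follows, using the profile and node names from this run as example values; it is an illustration, not the helper the test actually calls.

    // cp_roundtrip_sketch.go - hedged sketch of the copy-and-verify loop exercised by CopyFile.
    package main

    import (
        "bytes"
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        const profile = "ha-717680"
        want, err := os.ReadFile("testdata/cp-test.txt")
        if err != nil {
            panic(err)
        }
        for _, node := range []string{profile, profile + "-m02", profile + "-m03", profile + "-m04"} {
            // Push the file onto the node, as in "minikube cp testdata/cp-test.txt <node>:/home/docker/cp-test.txt".
            cp := exec.Command("minikube", "-p", profile, "cp", "testdata/cp-test.txt", node+":/home/docker/cp-test.txt")
            if out, err := cp.CombinedOutput(); err != nil {
                panic(fmt.Sprintf("cp to %s failed: %v\n%s", node, err, out))
            }
            // Read it back over ssh and compare with the local bytes.
            cat := exec.Command("minikube", "-p", profile, "ssh", "-n", node, "sudo cat /home/docker/cp-test.txt")
            got, err := cat.Output()
            if err != nil {
                panic(fmt.Sprintf("ssh cat on %s failed: %v", node, err))
            }
            if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
                panic("content mismatch on " + node)
            }
            fmt.Println("round-trip ok:", node)
        }
    }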

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-717680 node stop m02 --alsologtostderr -v 5: (12.012882361s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-717680 status --alsologtostderr -v 5: exit status 7 (758.835961ms)

                                                
                                                
-- stdout --
	ha-717680
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-717680-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-717680-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-717680-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 18:52:45.650089  329773 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:52:45.650295  329773 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:52:45.650322  329773 out.go:374] Setting ErrFile to fd 2...
	I1003 18:52:45.650384  329773 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:52:45.650800  329773 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-284583/.minikube/bin
	I1003 18:52:45.651065  329773 out.go:368] Setting JSON to false
	I1003 18:52:45.651117  329773 mustload.go:65] Loading cluster: ha-717680
	I1003 18:52:45.651956  329773 notify.go:220] Checking for updates...
	I1003 18:52:45.652221  329773 config.go:182] Loaded profile config "ha-717680": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:52:45.652252  329773 status.go:174] checking status of ha-717680 ...
	I1003 18:52:45.653396  329773 cli_runner.go:164] Run: docker container inspect ha-717680 --format={{.State.Status}}
	I1003 18:52:45.673335  329773 status.go:371] ha-717680 host status = "Running" (err=<nil>)
	I1003 18:52:45.673396  329773 host.go:66] Checking if "ha-717680" exists ...
	I1003 18:52:45.673713  329773 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-717680
	I1003 18:52:45.711776  329773 host.go:66] Checking if "ha-717680" exists ...
	I1003 18:52:45.712100  329773 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 18:52:45.712147  329773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-717680
	I1003 18:52:45.729335  329773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/ha-717680/id_rsa Username:docker}
	I1003 18:52:45.826653  329773 ssh_runner.go:195] Run: systemctl --version
	I1003 18:52:45.833224  329773 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 18:52:45.847488  329773 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 18:52:45.907417  329773 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-10-03 18:52:45.897780864 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1003 18:52:45.907957  329773 kubeconfig.go:125] found "ha-717680" server: "https://192.168.49.254:8443"
	I1003 18:52:45.908002  329773 api_server.go:166] Checking apiserver status ...
	I1003 18:52:45.908047  329773 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:52:45.920452  329773 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1266/cgroup
	I1003 18:52:45.929471  329773 api_server.go:182] apiserver freezer: "4:freezer:/docker/913d59c6a57a5b363b31c6c965328e85f70c5d7a56a969f55e4343ac53814c14/crio/crio-cbdaaa3df51a01ea9b211f298ef128a62571453f8238a7f4d43dd6765554a4be"
	I1003 18:52:45.929541  329773 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/913d59c6a57a5b363b31c6c965328e85f70c5d7a56a969f55e4343ac53814c14/crio/crio-cbdaaa3df51a01ea9b211f298ef128a62571453f8238a7f4d43dd6765554a4be/freezer.state
	I1003 18:52:45.937348  329773 api_server.go:204] freezer state: "THAWED"
	I1003 18:52:45.937376  329773 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1003 18:52:45.946005  329773 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1003 18:52:45.946033  329773 status.go:463] ha-717680 apiserver status = Running (err=<nil>)
	I1003 18:52:45.946044  329773 status.go:176] ha-717680 status: &{Name:ha-717680 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1003 18:52:45.946061  329773 status.go:174] checking status of ha-717680-m02 ...
	I1003 18:52:45.946392  329773 cli_runner.go:164] Run: docker container inspect ha-717680-m02 --format={{.State.Status}}
	I1003 18:52:45.964033  329773 status.go:371] ha-717680-m02 host status = "Stopped" (err=<nil>)
	I1003 18:52:45.964054  329773 status.go:384] host is not running, skipping remaining checks
	I1003 18:52:45.964061  329773 status.go:176] ha-717680-m02 status: &{Name:ha-717680-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1003 18:52:45.964081  329773 status.go:174] checking status of ha-717680-m03 ...
	I1003 18:52:45.964391  329773 cli_runner.go:164] Run: docker container inspect ha-717680-m03 --format={{.State.Status}}
	I1003 18:52:45.996100  329773 status.go:371] ha-717680-m03 host status = "Running" (err=<nil>)
	I1003 18:52:45.996127  329773 host.go:66] Checking if "ha-717680-m03" exists ...
	I1003 18:52:45.996457  329773 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-717680-m03
	I1003 18:52:46.025802  329773 host.go:66] Checking if "ha-717680-m03" exists ...
	I1003 18:52:46.026132  329773 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 18:52:46.026176  329773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-717680-m03
	I1003 18:52:46.045551  329773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/ha-717680-m03/id_rsa Username:docker}
	I1003 18:52:46.143366  329773 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 18:52:46.156937  329773 kubeconfig.go:125] found "ha-717680" server: "https://192.168.49.254:8443"
	I1003 18:52:46.156968  329773 api_server.go:166] Checking apiserver status ...
	I1003 18:52:46.157011  329773 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 18:52:46.168540  329773 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1198/cgroup
	I1003 18:52:46.176774  329773 api_server.go:182] apiserver freezer: "4:freezer:/docker/ca8284e76d99c492a8bb1d01ed750795e2f101766f6737a7e32623a4f2cad961/crio/crio-99c61066f6102f00ab6daca566c297362a52a6d63413b144384d538c8c6a7bcd"
	I1003 18:52:46.176902  329773 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/ca8284e76d99c492a8bb1d01ed750795e2f101766f6737a7e32623a4f2cad961/crio/crio-99c61066f6102f00ab6daca566c297362a52a6d63413b144384d538c8c6a7bcd/freezer.state
	I1003 18:52:46.184210  329773 api_server.go:204] freezer state: "THAWED"
	I1003 18:52:46.184250  329773 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1003 18:52:46.192477  329773 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1003 18:52:46.192507  329773 status.go:463] ha-717680-m03 apiserver status = Running (err=<nil>)
	I1003 18:52:46.192517  329773 status.go:176] ha-717680-m03 status: &{Name:ha-717680-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1003 18:52:46.192533  329773 status.go:174] checking status of ha-717680-m04 ...
	I1003 18:52:46.192886  329773 cli_runner.go:164] Run: docker container inspect ha-717680-m04 --format={{.State.Status}}
	I1003 18:52:46.211061  329773 status.go:371] ha-717680-m04 host status = "Running" (err=<nil>)
	I1003 18:52:46.211083  329773 host.go:66] Checking if "ha-717680-m04" exists ...
	I1003 18:52:46.211456  329773 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-717680-m04
	I1003 18:52:46.228801  329773 host.go:66] Checking if "ha-717680-m04" exists ...
	I1003 18:52:46.229104  329773 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 18:52:46.229148  329773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-717680-m04
	I1003 18:52:46.249823  329773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/ha-717680-m04/id_rsa Username:docker}
	I1003 18:52:46.345922  329773 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 18:52:46.358619  329773 status.go:176] ha-717680-m04 status: &{Name:ha-717680-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.77s)
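
The non-zero exit here is expected: status reports per-node state and exits non-zero when any host is down (m02 was just stopped), while still writing its report. The same information is available as JSON; below is a hedged Go sketch that shells out to minikube status --output json and decodes it. The field names mirror the Status struct printed in the stderr trace above (Name/Host/Kubelet/APIServer/Kubeconfig/Worker), but the exact JSON layout for multi-node profiles is an assumption here, not something this log confirms.

    // status_sketch.go - hedged sketch of reading per-node status as JSON for a multi-node profile.
    package main

    import (
        "encoding/json"
        "errors"
        "fmt"
        "os/exec"
    )

    // nodeStatus mirrors the fields visible in the stderr trace above; the JSON layout is assumed.
    type nodeStatus struct {
        Name       string
        Host       string
        Kubelet    string
        APIServer  string
        Kubeconfig string
        Worker     bool
    }

    func main() {
        cmd := exec.Command("minikube", "-p", "ha-717680", "status", "--output", "json")
        out, err := cmd.Output()
        // status exits non-zero (e.g. the exit status 7 above) when a host is stopped,
        // but still writes its report to stdout, so only hard-fail on non-exit errors.
        var ee *exec.ExitError
        if err != nil && !errors.As(err, &ee) {
            panic(err)
        }
        var nodes []nodeStatus
        if err := json.Unmarshal(out, &nodes); err != nil {
            panic(err)
        }
        for _, n := range nodes {
            fmt.Printf("%-16s host=%-8s kubelet=%-8s apiserver=%s\n", n.Name, n.Host, n.Kubelet, n.APIServer)
        }
    }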

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.76s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.76s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (32.29s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 node start m02 --alsologtostderr -v 5
E1003 18:53:13.900574  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/functional-680560/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-717680 node start m02 --alsologtostderr -v 5: (30.799270705s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-717680 status --alsologtostderr -v 5: (1.254097092s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (32.29s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.22s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.222622072s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.22s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (118.58s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-717680 stop --alsologtostderr -v 5: (26.50914042s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 start --wait true --alsologtostderr -v 5
E1003 18:54:35.822601  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/functional-680560/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:54:45.417143  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-717680 start --wait true --alsologtostderr -v 5: (1m31.897632306s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (118.58s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (10.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-717680 node delete m03 --alsologtostderr -v 5: (9.879310107s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.88s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.78s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.78s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (35.71s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-717680 stop --alsologtostderr -v 5: (35.595775736s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-717680 status --alsologtostderr -v 5: exit status 7 (112.678987ms)

                                                
                                                
-- stdout --
	ha-717680
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-717680-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-717680-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 18:56:06.522248  341527 out.go:360] Setting OutFile to fd 1 ...
	I1003 18:56:06.522624  341527 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:56:06.522657  341527 out.go:374] Setting ErrFile to fd 2...
	I1003 18:56:06.522676  341527 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 18:56:06.522968  341527 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-284583/.minikube/bin
	I1003 18:56:06.523206  341527 out.go:368] Setting JSON to false
	I1003 18:56:06.523262  341527 mustload.go:65] Loading cluster: ha-717680
	I1003 18:56:06.523795  341527 config.go:182] Loaded profile config "ha-717680": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 18:56:06.523877  341527 status.go:174] checking status of ha-717680 ...
	I1003 18:56:06.524429  341527 cli_runner.go:164] Run: docker container inspect ha-717680 --format={{.State.Status}}
	I1003 18:56:06.523852  341527 notify.go:220] Checking for updates...
	I1003 18:56:06.541348  341527 status.go:371] ha-717680 host status = "Stopped" (err=<nil>)
	I1003 18:56:06.541371  341527 status.go:384] host is not running, skipping remaining checks
	I1003 18:56:06.541378  341527 status.go:176] ha-717680 status: &{Name:ha-717680 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1003 18:56:06.541404  341527 status.go:174] checking status of ha-717680-m02 ...
	I1003 18:56:06.541707  341527 cli_runner.go:164] Run: docker container inspect ha-717680-m02 --format={{.State.Status}}
	I1003 18:56:06.562262  341527 status.go:371] ha-717680-m02 host status = "Stopped" (err=<nil>)
	I1003 18:56:06.562286  341527 status.go:384] host is not running, skipping remaining checks
	I1003 18:56:06.562302  341527 status.go:176] ha-717680-m02 status: &{Name:ha-717680-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1003 18:56:06.562322  341527 status.go:174] checking status of ha-717680-m04 ...
	I1003 18:56:06.562616  341527 cli_runner.go:164] Run: docker container inspect ha-717680-m04 --format={{.State.Status}}
	I1003 18:56:06.584381  341527 status.go:371] ha-717680-m04 host status = "Stopped" (err=<nil>)
	I1003 18:56:06.584402  341527 status.go:384] host is not running, skipping remaining checks
	I1003 18:56:06.584408  341527 status.go:176] ha-717680-m04 status: &{Name:ha-717680-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.71s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (162.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1003 18:56:51.960892  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/functional-680560/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 18:57:19.665534  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/functional-680560/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-717680 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (2m41.858361751s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (162.80s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.76s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.76s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (80.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 node add --control-plane --alsologtostderr -v 5
E1003 18:59:45.417386  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-717680 node add --control-plane --alsologtostderr -v 5: (1m19.713515207s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-717680 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-717680 status --alsologtostderr -v 5: (1.074491341s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (80.79s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.05s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.048879231s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.05s)

                                                
                                    
TestJSONOutput/start/Command (82.71s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-679462 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-679462 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m22.709760103s)
--- PASS: TestJSONOutput/start/Command (82.71s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.73s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-679462 --output=json --user=testUser
E1003 19:01:51.962270  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/functional-680560/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-679462 --output=json --user=testUser: (5.72986756s)
--- PASS: TestJSONOutput/stop/Command (5.73s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.25s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-562776 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-562776 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (99.889173ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"ee29c85b-469d-43ee-bf31-de69249f9f09","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-562776] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5b3a3f46-cf3e-4228-ab12-4b9d0fb59cee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21625"}}
	{"specversion":"1.0","id":"df74dbae-af94-4f6a-bdca-c7973b2ebce6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"24c8b187-be93-43c7-a2f8-09f976a7b6aa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21625-284583/kubeconfig"}}
	{"specversion":"1.0","id":"d1cb74cb-bf0d-4c27-a657-a763be1bceb0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-284583/.minikube"}}
	{"specversion":"1.0","id":"90e4a17d-bf47-4ac3-bb18-91b429bf3d7d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"9cd2dad7-0db4-420b-b396-a19ceb6874fe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"255ce9ad-67a6-480c-9204-5a9ef11fcdc9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-562776" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-562776
--- PASS: TestErrorJSONOutput (0.25s)
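
The stdout block above is what --output=json produces: one CloudEvents-style JSON object per line, with a string-valued data map and a type field that distinguishes setup steps, info lines and errors (the last line carries DRV_UNSUPPORTED_OS and exit code 56). Below is a small hedged Go sketch for consuming such a stream; it decodes only the fields visible in this log and treats the event type strings as examples taken from the output above.

    // events_sketch.go - hedged sketch of decoding the per-line JSON events shown above.
    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "os"
    )

    type minikubeEvent struct {
        Type string            `json:"type"`
        Data map[string]string `json:"data"`
    }

    func main() {
        // e.g. pipe "minikube start --output=json ..." into this program.
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
        for sc.Scan() {
            var ev minikubeEvent
            if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
                continue // skip any non-JSON lines
            }
            switch ev.Type {
            case "io.k8s.sigs.minikube.error":
                fmt.Printf("error %s (exit code %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
            default:
                fmt.Println(ev.Data["message"])
            }
        }
    }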

                                                
                                    
TestKicCustomNetwork/create_custom_network (39.3s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-395522 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-395522 --network=: (37.137322199s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-395522" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-395522
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-395522: (2.133911181s)
--- PASS: TestKicCustomNetwork/create_custom_network (39.30s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (35.3s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-843883 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-843883 --network=bridge: (33.232387382s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-843883" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-843883
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-843883: (2.043458901s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (35.30s)

                                                
                                    
TestKicExistingNetwork (34.27s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1003 19:03:13.178455  286434 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1003 19:03:13.194610  286434 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1003 19:03:13.194696  286434 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1003 19:03:13.194718  286434 cli_runner.go:164] Run: docker network inspect existing-network
W1003 19:03:13.210812  286434 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1003 19:03:13.210845  286434 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1003 19:03:13.210859  286434 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1003 19:03:13.210976  286434 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1003 19:03:13.229667  286434 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-3a8a28910ba8 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6e:7a:d0:f8:54:63} reservation:<nil>}
I1003 19:03:13.230045  286434 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4000345450}
I1003 19:03:13.230070  286434 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1003 19:03:13.230138  286434 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1003 19:03:13.300311  286434 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-337536 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-337536 --network=existing-network: (32.164848101s)
helpers_test.go:175: Cleaning up "existing-network-337536" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-337536
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-337536: (1.94608693s)
I1003 19:03:47.427759  286434 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (34.27s)
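
The I/W lines above show the setup this test performs itself: it inspects the existing networks, picks a free private subnet (192.168.58.0/24, after skipping the taken 192.168.49.0/24), creates a labelled bridge network named existing-network, and only then runs minikube start --network=existing-network so the cluster attaches to the pre-created network. A hedged Go sketch of that sequence follows, with the docker flags copied from the cli_runner line above; the subnet and profile name are example values from this run.

    // existing_network_sketch.go - hedged sketch of pre-creating a docker network for minikube to reuse.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func run(name string, args ...string) {
        out, err := exec.Command(name, args...).CombinedOutput()
        if err != nil {
            panic(fmt.Sprintf("%s %v failed: %v\n%s", name, args, err, out))
        }
    }

    func main() {
        // Flags mirror the "docker network create" call logged above; pick a subnet that is free locally.
        run("docker", "network", "create",
            "--driver=bridge",
            "--subnet=192.168.58.0/24", "--gateway=192.168.58.1",
            "-o", "--ip-masq", "-o", "--icc",
            "-o", "com.docker.network.driver.mtu=1500",
            "--label=created_by.minikube.sigs.k8s.io=true",
            "--label=name.minikube.sigs.k8s.io=existing-network",
            "existing-network")
        // Start a profile that attaches to the network created above.
        run("minikube", "start", "-p", "existing-network-337536", "--network=existing-network")
        fmt.Println("cluster attached to pre-created network existing-network")
    }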

                                                
                                    
TestKicCustomSubnet (31.68s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-109944 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-109944 --subnet=192.168.60.0/24: (29.51835779s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-109944 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-109944" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-109944
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-109944: (2.139441214s)
--- PASS: TestKicCustomSubnet (31.68s)

                                                
                                    
TestKicStaticIP (37.66s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-585457 --static-ip=192.168.200.200
E1003 19:04:45.417322  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-585457 --static-ip=192.168.200.200: (35.38671892s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-585457 ip
helpers_test.go:175: Cleaning up "static-ip-585457" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-585457
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-585457: (2.127357997s)
--- PASS: TestKicStaticIP (37.66s)

                                                
                                    
TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (71.83s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-780830 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-780830 --driver=docker  --container-runtime=crio: (32.520774919s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-783702 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-783702 --driver=docker  --container-runtime=crio: (33.978567587s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-780830
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-783702
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-783702" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-783702
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-783702: (1.973355461s)
helpers_test.go:175: Cleaning up "first-780830" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-780830
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-780830: (1.940780481s)
--- PASS: TestMinikubeProfile (71.83s)
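minikube_profile_test.go:55 reads "profile list -ojson" after switching the active profile. That JSON can be decoded without pulling in minikube's internal config types. A hedged sketch: the top-level valid/invalid arrays and the per-profile Name field are assumptions about the current output schema, so verify them against your minikube version:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileEntry keeps only the field this sketch needs from "minikube profile list -o json".
type profileEntry struct {
	Name string `json:"Name"`
}

func main() {
	out, err := exec.Command("minikube", "profile", "list", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	var listing struct {
		Valid   []profileEntry `json:"valid"`
		Invalid []profileEntry `json:"invalid"`
	}
	if err := json.Unmarshal(out, &listing); err != nil {
		panic(err)
	}
	for _, p := range listing.Valid {
		fmt.Println("valid profile:", p.Name)
	}
	fmt.Println("invalid profiles:", len(listing.Invalid))
}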

                                                
                                    
TestMountStart/serial/StartWithMountFirst (9.82s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-331382 --memory=3072 --mount-string /tmp/TestMountStartserial538149677/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-331382 --memory=3072 --mount-string /tmp/TestMountStartserial538149677/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.822382453s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.82s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.27s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-331382 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)
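The two subtests above start a no-Kubernetes profile with a host-directory mount and then confirm the guest-side path is visible over SSH. A condensed sketch of the same flow, mirroring the flags logged above (gid/uid/msize flags omitted for brevity); the host directory and profile name are illustrative and the host directory must already exist:

package main

import (
	"log"
	"os/exec"
)

func main() {
	cmds := [][]string{
		// Start a profile whose only job is to expose /tmp/demo-mount inside the guest.
		{"minikube", "start", "-p", "mount-demo", "--memory=3072", "--no-kubernetes",
			"--mount-string", "/tmp/demo-mount:/minikube-host", "--mount-port", "46464",
			"--driver=docker", "--container-runtime=crio"},
		// Verify the mount by listing the guest-side path over SSH.
		{"minikube", "-p", "mount-demo", "ssh", "--", "ls", "/minikube-host"},
	}
	for _, c := range cmds {
		out, err := exec.Command(c[0], c[1:]...).CombinedOutput()
		log.Printf("$ %v\n%s", c, out)
		if err != nil {
			log.Fatal(err)
		}
	}
}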

                                                
                                    
TestMountStart/serial/StartWithMountSecond (9.16s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-333761 --memory=3072 --mount-string /tmp/TestMountStartserial538149677/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-333761 --memory=3072 --mount-string /tmp/TestMountStartserial538149677/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.158813916s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.16s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-333761 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.62s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-331382 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-331382 --alsologtostderr -v=5: (1.61660676s)
--- PASS: TestMountStart/serial/DeleteFirst (1.62s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.29s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-333761 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.29s)

                                                
                                    
TestMountStart/serial/Stop (1.22s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-333761
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-333761: (1.219029767s)
--- PASS: TestMountStart/serial/Stop (1.22s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.85s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-333761
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-333761: (6.848388853s)
--- PASS: TestMountStart/serial/RestartStopped (7.85s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-333761 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (136.26s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-801839 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1003 19:06:51.958041  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/functional-680560/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 19:07:48.486471  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 19:08:15.029813  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/functional-680560/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-801839 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (2m15.762762662s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-801839 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (136.26s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.95s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-801839 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-801839 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-801839 -- rollout status deployment/busybox: (3.194588797s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-801839 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-801839 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-801839 -- exec busybox-7b57f96db7-hs5gp -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-801839 -- exec busybox-7b57f96db7-sdzqd -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-801839 -- exec busybox-7b57f96db7-hs5gp -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-801839 -- exec busybox-7b57f96db7-sdzqd -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-801839 -- exec busybox-7b57f96db7-hs5gp -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-801839 -- exec busybox-7b57f96db7-sdzqd -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.95s)
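The DNS checks above list the busybox pod names with a JSONPath query and then run nslookup inside each pod. A short sketch of the same loop; the context name multinode-801839 and the deployment are taken from this run but stand in for whatever cluster you point kubectl at:

package main

import (
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Collect the pod names, space separated, via JSONPath.
	out, err := exec.Command("kubectl", "--context", "multinode-801839",
		"get", "pods", "-o", "jsonpath={.items[*].metadata.name}").Output()
	if err != nil {
		log.Fatal(err)
	}
	for _, pod := range strings.Fields(string(out)) {
		// Resolve an external and an in-cluster name from inside every pod.
		for _, host := range []string{"kubernetes.io", "kubernetes.default.svc.cluster.local"} {
			res, err := exec.Command("kubectl", "--context", "multinode-801839",
				"exec", pod, "--", "nslookup", host).CombinedOutput()
			log.Printf("%s -> %s\n%s", pod, host, res)
			if err != nil {
				log.Fatal(err)
			}
		}
	}
}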

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.9s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-801839 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-801839 -- exec busybox-7b57f96db7-hs5gp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-801839 -- exec busybox-7b57f96db7-hs5gp -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-801839 -- exec busybox-7b57f96db7-sdzqd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-801839 -- exec busybox-7b57f96db7-sdzqd -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.90s)

                                                
                                    
TestMultiNode/serial/AddNode (58.72s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-801839 -v=5 --alsologtostderr
E1003 19:09:45.417222  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-801839 -v=5 --alsologtostderr: (57.975225638s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-801839 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (58.72s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-801839 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.69s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.69s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.23s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-801839 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-801839 cp testdata/cp-test.txt multinode-801839:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-801839 ssh -n multinode-801839 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-801839 cp multinode-801839:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile418533726/001/cp-test_multinode-801839.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-801839 ssh -n multinode-801839 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-801839 cp multinode-801839:/home/docker/cp-test.txt multinode-801839-m02:/home/docker/cp-test_multinode-801839_multinode-801839-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-801839 ssh -n multinode-801839 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-801839 ssh -n multinode-801839-m02 "sudo cat /home/docker/cp-test_multinode-801839_multinode-801839-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-801839 cp multinode-801839:/home/docker/cp-test.txt multinode-801839-m03:/home/docker/cp-test_multinode-801839_multinode-801839-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-801839 ssh -n multinode-801839 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-801839 ssh -n multinode-801839-m03 "sudo cat /home/docker/cp-test_multinode-801839_multinode-801839-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-801839 cp testdata/cp-test.txt multinode-801839-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-801839 ssh -n multinode-801839-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-801839 cp multinode-801839-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile418533726/001/cp-test_multinode-801839-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-801839 ssh -n multinode-801839-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-801839 cp multinode-801839-m02:/home/docker/cp-test.txt multinode-801839:/home/docker/cp-test_multinode-801839-m02_multinode-801839.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-801839 ssh -n multinode-801839-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-801839 ssh -n multinode-801839 "sudo cat /home/docker/cp-test_multinode-801839-m02_multinode-801839.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-801839 cp multinode-801839-m02:/home/docker/cp-test.txt multinode-801839-m03:/home/docker/cp-test_multinode-801839-m02_multinode-801839-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-801839 ssh -n multinode-801839-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-801839 ssh -n multinode-801839-m03 "sudo cat /home/docker/cp-test_multinode-801839-m02_multinode-801839-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-801839 cp testdata/cp-test.txt multinode-801839-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-801839 ssh -n multinode-801839-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-801839 cp multinode-801839-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile418533726/001/cp-test_multinode-801839-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-801839 ssh -n multinode-801839-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-801839 cp multinode-801839-m03:/home/docker/cp-test.txt multinode-801839:/home/docker/cp-test_multinode-801839-m03_multinode-801839.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-801839 ssh -n multinode-801839-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-801839 ssh -n multinode-801839 "sudo cat /home/docker/cp-test_multinode-801839-m03_multinode-801839.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-801839 cp multinode-801839-m03:/home/docker/cp-test.txt multinode-801839-m02:/home/docker/cp-test_multinode-801839-m03_multinode-801839-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-801839 ssh -n multinode-801839-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-801839 ssh -n multinode-801839-m02 "sudo cat /home/docker/cp-test_multinode-801839-m03_multinode-801839-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.23s)

                                                
                                    
TestMultiNode/serial/StopNode (2.31s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-801839 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-801839 node stop m03: (1.234682161s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-801839 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-801839 status: exit status 7 (539.595619ms)

                                                
                                                
-- stdout --
	multinode-801839
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-801839-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-801839-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-801839 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-801839 status --alsologtostderr: exit status 7 (531.320156ms)

                                                
                                                
-- stdout --
	multinode-801839
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-801839-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-801839-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 19:10:14.868607  391754 out.go:360] Setting OutFile to fd 1 ...
	I1003 19:10:14.868715  391754 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 19:10:14.868754  391754 out.go:374] Setting ErrFile to fd 2...
	I1003 19:10:14.868760  391754 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 19:10:14.869023  391754 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-284583/.minikube/bin
	I1003 19:10:14.869212  391754 out.go:368] Setting JSON to false
	I1003 19:10:14.869248  391754 mustload.go:65] Loading cluster: multinode-801839
	I1003 19:10:14.869316  391754 notify.go:220] Checking for updates...
	I1003 19:10:14.870228  391754 config.go:182] Loaded profile config "multinode-801839": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 19:10:14.870250  391754 status.go:174] checking status of multinode-801839 ...
	I1003 19:10:14.870830  391754 cli_runner.go:164] Run: docker container inspect multinode-801839 --format={{.State.Status}}
	I1003 19:10:14.889355  391754 status.go:371] multinode-801839 host status = "Running" (err=<nil>)
	I1003 19:10:14.889380  391754 host.go:66] Checking if "multinode-801839" exists ...
	I1003 19:10:14.889674  391754 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-801839
	I1003 19:10:14.911538  391754 host.go:66] Checking if "multinode-801839" exists ...
	I1003 19:10:14.911831  391754 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 19:10:14.911873  391754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-801839
	I1003 19:10:14.934702  391754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33273 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/multinode-801839/id_rsa Username:docker}
	I1003 19:10:15.032053  391754 ssh_runner.go:195] Run: systemctl --version
	I1003 19:10:15.042027  391754 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 19:10:15.059360  391754 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 19:10:15.117473  391754 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-03 19:10:15.107458769 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1003 19:10:15.118488  391754 kubeconfig.go:125] found "multinode-801839" server: "https://192.168.67.2:8443"
	I1003 19:10:15.118531  391754 api_server.go:166] Checking apiserver status ...
	I1003 19:10:15.118581  391754 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1003 19:10:15.130406  391754 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1216/cgroup
	I1003 19:10:15.139236  391754 api_server.go:182] apiserver freezer: "4:freezer:/docker/b418e9f1b74a75598e6a82170be2b99382869d390d34ad16891c96a0289cd8ee/crio/crio-ed87587ec854a275843d9b2a8557d7e1ee8e23a1dae766f5d809137b3ad6e8d8"
	I1003 19:10:15.139311  391754 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/b418e9f1b74a75598e6a82170be2b99382869d390d34ad16891c96a0289cd8ee/crio/crio-ed87587ec854a275843d9b2a8557d7e1ee8e23a1dae766f5d809137b3ad6e8d8/freezer.state
	I1003 19:10:15.149637  391754 api_server.go:204] freezer state: "THAWED"
	I1003 19:10:15.149663  391754 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1003 19:10:15.158011  391754 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1003 19:10:15.158041  391754 status.go:463] multinode-801839 apiserver status = Running (err=<nil>)
	I1003 19:10:15.158053  391754 status.go:176] multinode-801839 status: &{Name:multinode-801839 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1003 19:10:15.158081  391754 status.go:174] checking status of multinode-801839-m02 ...
	I1003 19:10:15.158393  391754 cli_runner.go:164] Run: docker container inspect multinode-801839-m02 --format={{.State.Status}}
	I1003 19:10:15.177028  391754 status.go:371] multinode-801839-m02 host status = "Running" (err=<nil>)
	I1003 19:10:15.177060  391754 host.go:66] Checking if "multinode-801839-m02" exists ...
	I1003 19:10:15.177402  391754 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-801839-m02
	I1003 19:10:15.195055  391754 host.go:66] Checking if "multinode-801839-m02" exists ...
	I1003 19:10:15.195377  391754 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1003 19:10:15.195434  391754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-801839-m02
	I1003 19:10:15.213824  391754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33278 SSHKeyPath:/home/jenkins/minikube-integration/21625-284583/.minikube/machines/multinode-801839-m02/id_rsa Username:docker}
	I1003 19:10:15.309958  391754 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1003 19:10:15.324197  391754 status.go:176] multinode-801839-m02 status: &{Name:multinode-801839-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1003 19:10:15.324232  391754 status.go:174] checking status of multinode-801839-m03 ...
	I1003 19:10:15.324541  391754 cli_runner.go:164] Run: docker container inspect multinode-801839-m03 --format={{.State.Status}}
	I1003 19:10:15.342603  391754 status.go:371] multinode-801839-m03 host status = "Stopped" (err=<nil>)
	I1003 19:10:15.342655  391754 status.go:384] host is not running, skipping remaining checks
	I1003 19:10:15.342664  391754 status.go:176] multinode-801839-m03 status: &{Name:multinode-801839-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.31s)
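Note that "minikube status" deliberately exits non-zero (exit status 7 in the runs above) once any node is stopped, so callers have to distinguish "command failed to run" from "cluster not fully running". A small sketch of reading that exit code in Go; interpreting the specific value is left to the minikube docs, the point here is only the *exec.ExitError handling:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "-p", "multinode-801839", "status")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("all nodes report Running")
	case errors.As(err, &exitErr):
		// A non-zero code (7 above) still comes with the per-node status on stdout.
		fmt.Println("status exit code:", exitErr.ExitCode())
	default:
		// The binary could not be started at all (not on PATH, permissions, ...).
		fmt.Println("failed to run minikube:", err)
	}
}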

                                                
                                    
TestMultiNode/serial/StartAfterStop (7.99s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-801839 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-801839 node start m03 -v=5 --alsologtostderr: (7.210294077s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-801839 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.99s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (78.32s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-801839
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-801839
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-801839: (24.721908363s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-801839 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-801839 --wait=true -v=5 --alsologtostderr: (53.466889635s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-801839
--- PASS: TestMultiNode/serial/RestartKeepsNodes (78.32s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.65s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-801839 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-801839 node delete m03: (4.933196704s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-801839 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.65s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.98s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-801839 stop
E1003 19:11:51.960764  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/functional-680560/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-801839 stop: (23.789331008s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-801839 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-801839 status: exit status 7 (102.532358ms)

                                                
                                                
-- stdout --
	multinode-801839
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-801839-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-801839 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-801839 status --alsologtostderr: exit status 7 (87.103092ms)

                                                
                                                
-- stdout --
	multinode-801839
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-801839-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 19:12:11.246312  399474 out.go:360] Setting OutFile to fd 1 ...
	I1003 19:12:11.246507  399474 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 19:12:11.246536  399474 out.go:374] Setting ErrFile to fd 2...
	I1003 19:12:11.246560  399474 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 19:12:11.246836  399474 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-284583/.minikube/bin
	I1003 19:12:11.247054  399474 out.go:368] Setting JSON to false
	I1003 19:12:11.247115  399474 mustload.go:65] Loading cluster: multinode-801839
	I1003 19:12:11.247197  399474 notify.go:220] Checking for updates...
	I1003 19:12:11.247552  399474 config.go:182] Loaded profile config "multinode-801839": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 19:12:11.247589  399474 status.go:174] checking status of multinode-801839 ...
	I1003 19:12:11.248373  399474 cli_runner.go:164] Run: docker container inspect multinode-801839 --format={{.State.Status}}
	I1003 19:12:11.267079  399474 status.go:371] multinode-801839 host status = "Stopped" (err=<nil>)
	I1003 19:12:11.267101  399474 status.go:384] host is not running, skipping remaining checks
	I1003 19:12:11.267107  399474 status.go:176] multinode-801839 status: &{Name:multinode-801839 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1003 19:12:11.267131  399474 status.go:174] checking status of multinode-801839-m02 ...
	I1003 19:12:11.267438  399474 cli_runner.go:164] Run: docker container inspect multinode-801839-m02 --format={{.State.Status}}
	I1003 19:12:11.283927  399474 status.go:371] multinode-801839-m02 host status = "Stopped" (err=<nil>)
	I1003 19:12:11.283945  399474 status.go:384] host is not running, skipping remaining checks
	I1003 19:12:11.283951  399474 status.go:176] multinode-801839-m02 status: &{Name:multinode-801839-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.98s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (52.12s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-801839 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-801839 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (51.43508846s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-801839 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (52.12s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (36.55s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-801839
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-801839-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-801839-m02 --driver=docker  --container-runtime=crio: exit status 14 (94.177321ms)

                                                
                                                
-- stdout --
	* [multinode-801839-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21625
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21625-284583/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-284583/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-801839-m02' is duplicated with machine name 'multinode-801839-m02' in profile 'multinode-801839'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-801839-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-801839-m03 --driver=docker  --container-runtime=crio: (34.097172111s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-801839
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-801839: exit status 80 (344.491763ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-801839 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-801839-m03 already exists in multinode-801839-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-801839-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-801839-m03: (1.960971037s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (36.55s)

                                                
                                    
TestPreload (125.75s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-354959 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
E1003 19:14:45.417248  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-354959 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (1m2.435754376s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-354959 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-354959 image pull gcr.io/k8s-minikube/busybox: (2.252405315s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-354959
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-354959: (5.8141199s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-354959 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:65: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-354959 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (52.733078785s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-354959 image list
helpers_test.go:175: Cleaning up "test-preload-354959" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-354959
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-354959: (2.288894167s)
--- PASS: TestPreload (125.75s)

                                                
                                    
TestScheduledStopUnix (112.04s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-486161 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-486161 --memory=3072 --driver=docker  --container-runtime=crio: (35.73920786s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-486161 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-486161 -n scheduled-stop-486161
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-486161 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1003 19:16:26.074953  286434 retry.go:31] will retry after 111.877µs: open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/scheduled-stop-486161/pid: no such file or directory
I1003 19:16:26.075488  286434 retry.go:31] will retry after 219.259µs: open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/scheduled-stop-486161/pid: no such file or directory
I1003 19:16:26.076625  286434 retry.go:31] will retry after 268.529µs: open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/scheduled-stop-486161/pid: no such file or directory
I1003 19:16:26.077754  286434 retry.go:31] will retry after 272.528µs: open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/scheduled-stop-486161/pid: no such file or directory
I1003 19:16:26.078889  286434 retry.go:31] will retry after 460.655µs: open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/scheduled-stop-486161/pid: no such file or directory
I1003 19:16:26.080016  286434 retry.go:31] will retry after 639.442µs: open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/scheduled-stop-486161/pid: no such file or directory
I1003 19:16:26.081139  286434 retry.go:31] will retry after 1.541734ms: open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/scheduled-stop-486161/pid: no such file or directory
I1003 19:16:26.083343  286434 retry.go:31] will retry after 2.15627ms: open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/scheduled-stop-486161/pid: no such file or directory
I1003 19:16:26.086558  286434 retry.go:31] will retry after 3.833963ms: open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/scheduled-stop-486161/pid: no such file or directory
I1003 19:16:26.090805  286434 retry.go:31] will retry after 2.519403ms: open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/scheduled-stop-486161/pid: no such file or directory
I1003 19:16:26.094033  286434 retry.go:31] will retry after 6.781551ms: open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/scheduled-stop-486161/pid: no such file or directory
I1003 19:16:26.101269  286434 retry.go:31] will retry after 10.493588ms: open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/scheduled-stop-486161/pid: no such file or directory
I1003 19:16:26.112501  286434 retry.go:31] will retry after 16.718514ms: open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/scheduled-stop-486161/pid: no such file or directory
I1003 19:16:26.129689  286434 retry.go:31] will retry after 28.63776ms: open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/scheduled-stop-486161/pid: no such file or directory
I1003 19:16:26.160366  286434 retry.go:31] will retry after 23.91249ms: open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/scheduled-stop-486161/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-486161 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-486161 -n scheduled-stop-486161
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-486161
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-486161 --schedule 15s
E1003 19:16:51.959469  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/functional-680560/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-486161
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-486161: exit status 7 (71.330691ms)

                                                
                                                
-- stdout --
	scheduled-stop-486161
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-486161 -n scheduled-stop-486161
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-486161 -n scheduled-stop-486161: exit status 7 (68.203492ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-486161" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-486161
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-486161: (4.719103769s)
--- PASS: TestScheduledStopUnix (112.04s)
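The scheduled-stop sequence above is driven entirely by flags on "minikube stop": schedule a stop, inspect the countdown via status --format={{.TimeToStop}}, cancel it, then schedule a short one and let it fire. A compressed sketch of that sequence with an illustrative profile name; timings are shortened and error handling kept minimal:

package main

import (
	"log"
	"os/exec"
	"time"
)

// run invokes the minikube binary once and echoes its combined output.
func run(args ...string) ([]byte, error) {
	out, err := exec.Command("minikube", args...).CombinedOutput()
	log.Printf("$ minikube %v\n%s", args, out)
	return out, err
}

func main() {
	const profile = "scheduled-stop-demo"
	run("start", "-p", profile, "--memory=3072", "--driver=docker", "--container-runtime=crio")

	// Schedule a stop well in the future, inspect the countdown, then cancel it.
	run("stop", "-p", profile, "--schedule", "5m")
	run("status", "-p", profile, "--format={{.TimeToStop}}")
	run("stop", "-p", profile, "--cancel-scheduled")

	// Schedule a short stop and wait for it to take effect; status then exits non-zero.
	run("stop", "-p", profile, "--schedule", "15s")
	time.Sleep(30 * time.Second)
	if _, err := run("status", "-p", profile); err != nil {
		log.Printf("host is stopped as expected: %v", err)
	}
	run("delete", "-p", profile)
}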

                                                
                                    
TestInsufficientStorage (13.13s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-308207 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-308207 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.684459085s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"dba62240-8f5c-4ec4-8f3d-0a4567988a06","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-308207] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"52baa29c-7f5b-4b98-bd63-356918ace8dd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21625"}}
	{"specversion":"1.0","id":"ff422a74-0f2a-43aa-ac50-62bcfdab02e7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"19a0b0db-1cac-4585-9564-098e8f06d831","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21625-284583/kubeconfig"}}
	{"specversion":"1.0","id":"817d27c2-1ab8-4f73-bf65-d78a5e1540d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-284583/.minikube"}}
	{"specversion":"1.0","id":"7059b4ee-1632-4696-a804-99917deffd3e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"f55bfd5e-0055-47bc-adf0-b016e1966326","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"97bc0741-0963-4ed6-82c4-daa9eb0da4e7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"8e482c15-23c2-4a74-b37e-8950d20fcc68","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"aadb06a5-85ba-469b-b9e9-f6dd0b721a40","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"03f415d0-ffc8-4494-a572-298d2fb220c6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"3e2824bb-aa86-4e00-861d-ab3c1cd8dfcc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-308207\" primary control-plane node in \"insufficient-storage-308207\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"17dfa6a1-c334-46f7-be8a-51b29cef1298","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1759382731-21643 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"870c1774-4192-451c-915f-9ba71acb0783","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"f008c8d4-645c-48a4-ae99-dcdf07d0bcc5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-308207 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-308207 --output=json --layout=cluster: exit status 7 (287.931107ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-308207","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-308207","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1003 19:17:52.827213  415578 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-308207" does not appear in /home/jenkins/minikube-integration/21625-284583/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-308207 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-308207 --output=json --layout=cluster: exit status 7 (286.024366ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-308207","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-308207","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1003 19:17:53.113983  415644 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-308207" does not appear in /home/jenkins/minikube-integration/21625-284583/kubeconfig
	E1003 19:17:53.123753  415644 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/insufficient-storage-308207/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-308207" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-308207
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-308207: (1.866240413s)
--- PASS: TestInsufficientStorage (13.13s)
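
The two status calls in this test return the cluster-layout JSON shown in the stdout blocks above (StatusCode 507 / InsufficientStorage, with per-node apiserver and kubelet components). Below is a minimal, hedged sketch of decoding that document in Go; only the fields visible above are modelled, and piping the JSON in on stdin is an assumption.

package main

import (
	"encoding/json"
	"fmt"
	"io"
	"os"
)

// clusterState mirrors the fields visible in the "--output=json --layout=cluster"
// documents above; other fields minikube may emit are omitted.
type clusterState struct {
	Name         string               `json:"Name"`
	StatusCode   int                  `json:"StatusCode"`
	StatusName   string               `json:"StatusName"`
	StatusDetail string               `json:"StatusDetail"`
	Components   map[string]component `json:"Components"`
	Nodes        []node               `json:"Nodes"`
}

type component struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

type node struct {
	Name       string               `json:"Name"`
	StatusCode int                  `json:"StatusCode"`
	StatusName string               `json:"StatusName"`
	Components map[string]component `json:"Components"`
}

func main() {
	// Assumption: the JSON document is piped in, e.g.
	//   minikube status -p insufficient-storage-308207 --output=json --layout=cluster | go run .
	raw, err := io.ReadAll(os.Stdin)
	if err != nil {
		panic(err)
	}
	var st clusterState
	if err := json.Unmarshal(raw, &st); err != nil {
		panic(err)
	}
	fmt.Printf("%s: %d %s (%s)\n", st.Name, st.StatusCode, st.StatusName, st.StatusDetail)
	for _, n := range st.Nodes {
		fmt.Printf("  node %s: %s, apiserver=%s, kubelet=%s\n",
			n.Name, n.StatusName, n.Components["apiserver"].StatusName, n.Components["kubelet"].StatusName)
	}
	if st.StatusCode == 507 { // InsufficientStorage, as in the run above
		os.Exit(7) // the test above observed minikube itself exiting 7 in this state
	}
}
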

                                                
                                    
TestRunningBinaryUpgrade (56.79s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.522345670 start -p running-upgrade-024862 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.522345670 start -p running-upgrade-024862 --memory=3072 --vm-driver=docker  --container-runtime=crio: (34.97555578s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-024862 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1003 19:21:51.958707  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/functional-680560/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-024862 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (18.361509354s)
helpers_test.go:175: Cleaning up "running-upgrade-024862" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-024862
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-024862: (1.957150119s)
--- PASS: TestRunningBinaryUpgrade (56.79s)

                                                
                                    
TestKubernetesUpgrade (356.35s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-629875 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1003 19:19:45.417057  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-629875 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (43.359893979s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-629875
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-629875: (1.356202995s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-629875 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-629875 status --format={{.Host}}: exit status 7 (123.923753ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-629875 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-629875 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m37.451925881s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-629875 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-629875 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-629875 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (96.252087ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-629875] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21625
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21625-284583/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-284583/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-629875
	    minikube start -p kubernetes-upgrade-629875 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6298752 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-629875 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-629875 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-629875 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (31.876531226s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-629875" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-629875
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-629875: (1.989844713s)
--- PASS: TestKubernetesUpgrade (356.35s)
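
The downgrade attempt in this test is refused with exit status 106 and a K8S_DOWNGRADE_UNSUPPORTED message instead of touching the running v1.34.1 cluster. The sketch below scripts that same guard check with os/exec; the binary path, profile name, and versions are copied from the run above and are assumptions outside this environment.

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Assumption: paths and names mirror the test run above.
	bin := "out/minikube-linux-arm64"
	profile := "kubernetes-upgrade-629875"

	// Ask an existing v1.34.1 cluster to start at an older Kubernetes version.
	cmd := exec.Command(bin, "start", "-p", profile,
		"--memory=3072", "--kubernetes-version=v1.28.0",
		"--driver=docker", "--container-runtime=crio")
	out, err := cmd.CombinedOutput()

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("unexpected: downgrade start succeeded")
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 106 &&
		strings.Contains(string(out), "K8S_DOWNGRADE_UNSUPPORTED"):
		// Same behaviour as in the log above: minikube refuses the in-place downgrade
		// and suggests deleting the profile or creating a second cluster instead.
		fmt.Println("downgrade correctly rejected; recreate the cluster to go back to v1.28.0")
	default:
		fmt.Printf("start failed for another reason: %v\n%s", err, out)
	}
}
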

                                                
                                    
TestMissingContainerUpgrade (119.3s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.4088080772 start -p missing-upgrade-546147 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.4088080772 start -p missing-upgrade-546147 --memory=3072 --driver=docker  --container-runtime=crio: (1m4.103478609s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-546147
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-546147: (1.040769501s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-546147
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-546147 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-546147 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (51.230308999s)
helpers_test.go:175: Cleaning up "missing-upgrade-546147" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-546147
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-546147: (2.047588468s)
--- PASS: TestMissingContainerUpgrade (119.30s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-929800 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-929800 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (102.829206ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-929800] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21625
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21625-284583/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-284583/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (49.43s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-929800 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-929800 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (48.972404579s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-929800 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (49.43s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (34.78s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-929800 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-929800 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (32.608567291s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-929800 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-929800 status -o json: exit status 2 (293.847476ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-929800","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-929800
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-929800: (1.877335986s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (34.78s)
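
The non-zero status call above still prints a machine-readable profile summary: the container host is Running while Kubelet and APIServer are Stopped, which is the expected shape after a --no-kubernetes restart. A small sketch of decoding that document follows; the struct lists only the keys visible in the stdout block above, and the sample document is copied from it.

package main

import (
	"encoding/json"
	"fmt"
)

// profileStatus mirrors the keys printed by "minikube status -o json" in the block above.
type profileStatus struct {
	Name       string `json:"Name"`
	Host       string `json:"Host"`
	Kubelet    string `json:"Kubelet"`
	APIServer  string `json:"APIServer"`
	Kubeconfig string `json:"Kubeconfig"`
	Worker     bool   `json:"Worker"`
}

func main() {
	// Sample document copied from the log above.
	raw := []byte(`{"Name":"NoKubernetes-929800","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`)
	var st profileStatus
	if err := json.Unmarshal(raw, &st); err != nil {
		panic(err)
	}
	// With --no-kubernetes the container is up but no control plane runs,
	// which is exactly the Host=Running / Kubelet=Stopped combination seen above.
	fmt.Printf("%s: host=%s kubelet=%s apiserver=%s\n", st.Name, st.Host, st.Kubelet, st.APIServer)
}
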

                                                
                                    
TestNoKubernetes/serial/Start (9.99s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-929800 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-929800 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (9.986357677s)
--- PASS: TestNoKubernetes/serial/Start (9.99s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-929800 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-929800 "sudo systemctl is-active --quiet service kubelet": exit status 1 (262.202326ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.23s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.23s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-929800
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-929800: (1.259930259s)
--- PASS: TestNoKubernetes/serial/Stop (1.26s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-929800 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-929800 --driver=docker  --container-runtime=crio: (7.206738338s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.21s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.37s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-929800 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-929800 "sudo systemctl is-active --quiet service kubelet": exit status 1 (369.634598ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.37s)
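
Both VerifyK8sNotRunning checks rely on systemctl is-active exiting non-zero (status 3 in the stderr above) when the kubelet unit is not running, which minikube ssh surfaces as exit status 1. The sketch below wraps the same probe; the binary path and profile name are assumptions carried over from this test.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// kubeletActive reports whether the kubelet unit is active inside the minikube node.
// systemctl is-active exits 0 when the unit is active and non-zero otherwise;
// minikube ssh propagates that as a non-zero exit status of its own.
func kubeletActive(bin, profile string) (bool, error) {
	cmd := exec.Command(bin, "ssh", "-p", profile,
		"sudo systemctl is-active --quiet service kubelet")
	err := cmd.Run()
	if err == nil {
		return true, nil
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return false, nil // unit exists but is not active (or is absent), as in the test above
	}
	return false, err // the command itself could not be run
}

func main() {
	// Assumption: same binary and profile as the NoKubernetes tests above.
	active, err := kubeletActive("out/minikube-linux-arm64", "NoKubernetes-929800")
	if err != nil {
		panic(err)
	}
	fmt.Println("kubelet active:", active)
}
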

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.69s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.69s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (62.18s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.443912306 start -p stopped-upgrade-414530 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.443912306 start -p stopped-upgrade-414530 --memory=3072 --vm-driver=docker  --container-runtime=crio: (40.215789392s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.443912306 -p stopped-upgrade-414530 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.443912306 -p stopped-upgrade-414530 stop: (1.915642406s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-414530 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-414530 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (20.045851237s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (62.18s)
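
This test drives the same profile with two binaries: the old v1.32.0 release creates and stops the cluster, then the freshly built binary starts it again, which is the actual upgrade step. A minimal sketch of that sequence is below; the old-binary path is the temporary fixture from the log, and both paths are assumptions elsewhere.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// run executes one minikube invocation and streams its output, returning the first failure.
func run(bin string, args ...string) error {
	cmd := exec.Command(bin, args...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	fmt.Println(">>>", bin, args)
	return cmd.Run()
}

func main() {
	// Assumptions: binary locations and profile name copied from the run above.
	oldBin := "/tmp/minikube-v1.32.0.443912306"
	newBin := "out/minikube-linux-arm64"
	profile := "stopped-upgrade-414530"

	steps := [][]string{
		// 1. Create the cluster with the old release.
		{oldBin, "start", "-p", profile, "--memory=3072", "--vm-driver=docker", "--container-runtime=crio"},
		// 2. Stop it with the same old release.
		{oldBin, "-p", profile, "stop"},
		// 3. Start the stopped cluster again with the new binary (the actual upgrade).
		{newBin, "start", "-p", profile, "--memory=3072", "--alsologtostderr", "-v=1", "--driver=docker", "--container-runtime=crio"},
	}
	for _, s := range steps {
		if err := run(s[0], s[1:]...); err != nil {
			fmt.Fprintln(os.Stderr, "step failed:", err)
			os.Exit(1)
		}
	}
}
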

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.24s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-414530
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-414530: (1.235527776s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.24s)

                                                
                                    
TestPause/serial/Start (84.58s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-844729 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-844729 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m24.577916692s)
--- PASS: TestPause/serial/Start (84.58s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (24.75s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-844729 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-844729 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (24.70940909s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (24.75s)

                                                
                                    
TestNetworkPlugins/group/false (3.66s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-388132 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-388132 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (200.001943ms)

                                                
                                                
-- stdout --
	* [false-388132] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21625
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21625-284583/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-284583/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1003 19:25:41.216198  453892 out.go:360] Setting OutFile to fd 1 ...
	I1003 19:25:41.216342  453892 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 19:25:41.216353  453892 out.go:374] Setting ErrFile to fd 2...
	I1003 19:25:41.216358  453892 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1003 19:25:41.216760  453892 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21625-284583/.minikube/bin
	I1003 19:25:41.217247  453892 out.go:368] Setting JSON to false
	I1003 19:25:41.218123  453892 start.go:130] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7693,"bootTime":1759511849,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1003 19:25:41.218233  453892 start.go:140] virtualization:  
	I1003 19:25:41.221735  453892 out.go:179] * [false-388132] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1003 19:25:41.225464  453892 out.go:179]   - MINIKUBE_LOCATION=21625
	I1003 19:25:41.225634  453892 notify.go:220] Checking for updates...
	I1003 19:25:41.231700  453892 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1003 19:25:41.234639  453892 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21625-284583/kubeconfig
	I1003 19:25:41.237517  453892 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21625-284583/.minikube
	I1003 19:25:41.240660  453892 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1003 19:25:41.243544  453892 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1003 19:25:41.246848  453892 config.go:182] Loaded profile config "force-systemd-flag-855981": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1003 19:25:41.246988  453892 driver.go:421] Setting default libvirt URI to qemu:///system
	I1003 19:25:41.279826  453892 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1003 19:25:41.279938  453892 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1003 19:25:41.344546  453892 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-03 19:25:41.334190648 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1003 19:25:41.344655  453892 docker.go:318] overlay module found
	I1003 19:25:41.347770  453892 out.go:179] * Using the docker driver based on user configuration
	I1003 19:25:41.350785  453892 start.go:304] selected driver: docker
	I1003 19:25:41.350807  453892 start.go:924] validating driver "docker" against <nil>
	I1003 19:25:41.350827  453892 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1003 19:25:41.354606  453892 out.go:203] 
	W1003 19:25:41.357629  453892 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1003 19:25:41.360538  453892 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-388132 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-388132

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-388132

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-388132

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-388132

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-388132

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-388132

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-388132

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-388132

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-388132

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-388132

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-388132"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-388132"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-388132"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-388132

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-388132"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-388132"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-388132" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-388132" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-388132" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-388132" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-388132" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-388132" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-388132" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-388132" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-388132"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-388132"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-388132"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-388132"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-388132"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-388132" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-388132" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-388132" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-388132"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-388132"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-388132"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-388132"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-388132"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-388132

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-388132"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-388132"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-388132"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-388132"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-388132"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-388132"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-388132"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-388132"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-388132"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-388132"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-388132"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-388132"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-388132"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-388132"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-388132"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-388132"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-388132"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-388132"

                                                
                                                
----------------------- debugLogs end: false-388132 [took: 3.318598542s] --------------------------------
helpers_test.go:175: Cleaning up "false-388132" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-388132
--- PASS: TestNetworkPlugins/group/false (3.66s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (62.69s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-174543 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-174543 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (1m2.684380976s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (62.69s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (8.69s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-174543 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [59ac2e32-e58b-4476-9428-b0694f51e499] Pending
helpers_test.go:352: "busybox" [59ac2e32-e58b-4476-9428-b0694f51e499] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [59ac2e32-e58b-4476-9428-b0694f51e499] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.006640231s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-174543 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.69s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.12s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-174543 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-174543 --alsologtostderr -v=3: (12.117005153s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.12s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (72.44s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-643397 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-643397 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m12.439318834s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (72.44s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.3s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-174543 -n old-k8s-version-174543
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-174543 -n old-k8s-version-174543: exit status 7 (86.552292ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-174543 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.30s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (60.35s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-174543 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
E1003 19:36:51.958683  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/functional-680560/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-174543 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (59.992297392s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-174543 -n old-k8s-version-174543
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (60.35s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-4tgnz" [cc2e663a-4e2d-43a5-8475-8e8990ff0576] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003193989s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-4tgnz" [cc2e663a-4e2d-43a5-8475-8e8990ff0576] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003309216s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-174543 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (8.4s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-643397 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [94854f52-744f-499a-b87d-fc57eb32aae8] Pending
helpers_test.go:352: "busybox" [94854f52-744f-499a-b87d-fc57eb32aae8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [94854f52-744f-499a-b87d-fc57eb32aae8] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004936712s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-643397 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.40s)
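
The DeployApp step creates the busybox pod from testdata/busybox.yaml, waits for it to become healthy, and then confirms kubectl exec works by reading the open-file limit. A compact sketch of the same flow (context name and manifest path from the log; the explicit kubectl wait is a stand-in for the harness's own pod polling):

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func run(args ...string) string {
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl %v: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	ctx := "no-preload-643397"
	run("--context", ctx, "create", "-f", "testdata/busybox.yaml")
	// Wait until the busybox pod is Ready before exec'ing into it (8m0s in the log).
	run("--context", ctx, "wait", "--for=condition=Ready", "pod", "busybox", "--timeout=480s")
	// The test only cares that exec succeeds and prints the open-file limit.
	fmt.Print(run("--context", ctx, "exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n"))
}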

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-174543 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)
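
VerifyKubernetesImages lists the images loaded in the node as JSON and reports anything outside the expected Kubernetes set, which is why kindnetd and the busybox test image show up as "non-minikube" above. A sketch of that kind of scan; the JSON schema of minikube image list --format=json is not shown in this log, so the "repoTags" field name and the registry.k8s.io prefix filter below are assumptions standing in for the test's real allow-list.

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("minikube", "-p", "old-k8s-version-174543",
		"image", "list", "--format=json").Output()
	if err != nil {
		log.Fatal(err)
	}
	// Decode generically because the exact schema is assumed, not confirmed here.
	var images []map[string]any
	if err := json.Unmarshal(out, &images); err != nil {
		log.Fatalf("unexpected output: %v\n%s", err, out)
	}
	for _, img := range images {
		tags, _ := img["repoTags"].([]any) // assumed field name
		for _, t := range tags {
			tag, _ := t.(string)
			if tag != "" && !strings.HasPrefix(tag, "registry.k8s.io/") {
				fmt.Println("Found non-minikube image:", tag)
			}
		}
	}
}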

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (12.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-643397 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-643397 --alsologtostderr -v=3: (12.0796868s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.08s)
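
The Stop steps simply run minikube stop against the profile; the duration in parentheses is the wall-clock time of the graceful shutdown. A minimal timing sketch (the 90-second ceiling is an illustrative bound, not a value from the harness):

package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	begin := time.Now()
	cmd := exec.Command("minikube", "stop", "-p", "no-preload-643397", "--alsologtostderr", "-v=3")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("stop failed: %v\n%s", err, out)
	}
	elapsed := time.Since(begin)
	if elapsed > 90*time.Second {
		log.Printf("warning: stop took %s, longer than expected", elapsed.Round(time.Second))
	}
	log.Printf("profile stopped in %s", elapsed.Round(time.Millisecond))
}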

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (89.97s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-327416 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-327416 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m29.973270614s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (89.97s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-643397 -n no-preload-643397
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-643397 -n no-preload-643397: exit status 7 (92.596505ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-643397 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (63.91s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-643397 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-643397 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m3.494443399s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-643397 -n no-preload-643397
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (63.91s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-8x6xp" [03197d5d-f14b-4903-acee-646f327e0394] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003740083s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-8x6xp" [03197d5d-f14b-4903-acee-646f327e0394] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003964971s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-643397 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-643397 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (10.45s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-327416 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [ac0dae91-bdf3-4c0b-b787-6ff828edd312] Pending
helpers_test.go:352: "busybox" [ac0dae91-bdf3-4c0b-b787-6ff828edd312] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [ac0dae91-bdf3-4c0b-b787-6ff828edd312] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.004772365s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-327416 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.45s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (88.52s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-842797 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-842797 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m28.518595542s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (88.52s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (12.03s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-327416 --alsologtostderr -v=3
E1003 19:39:45.417276  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-327416 --alsologtostderr -v=3: (12.034326553s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.03s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-327416 -n embed-certs-327416
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-327416 -n embed-certs-327416: exit status 7 (109.28512ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-327416 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.26s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (53.02s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-327416 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-327416 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (52.637220208s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-327416 -n embed-certs-327416
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (53.02s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-4hzk6" [4e9fe78a-88e3-4ce0-9e2e-9e4442ab2967] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003710745s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.14s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-4hzk6" [4e9fe78a-88e3-4ce0-9e2e-9e4442ab2967] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004014382s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-327416 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.14s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-327416 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.44s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-842797 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [8e5137cd-0a54-45cf-a04a-251fab3a1832] Pending
helpers_test.go:352: "busybox" [8e5137cd-0a54-45cf-a04a-251fab3a1832] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [8e5137cd-0a54-45cf-a04a-251fab3a1832] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.00385882s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-842797 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.44s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (46.53s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-277907 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-277907 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (46.528441102s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (46.53s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.05s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-842797 --alsologtostderr -v=3
E1003 19:41:06.218395  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/old-k8s-version-174543/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 19:41:07.500177  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/old-k8s-version-174543/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 19:41:08.491336  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 19:41:10.061585  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/old-k8s-version-174543/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 19:41:15.183229  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/old-k8s-version-174543/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-842797 --alsologtostderr -v=3: (12.047115774s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.05s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-842797 -n default-k8s-diff-port-842797
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-842797 -n default-k8s-diff-port-842797: exit status 7 (124.876238ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-842797 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.26s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (53.92s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-842797 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1003 19:41:25.425432  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/old-k8s-version-174543/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 19:41:35.037783  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/functional-680560/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 19:41:45.907365  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/old-k8s-version-174543/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-842797 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (53.499723264s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-842797 -n default-k8s-diff-port-842797
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (53.92s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (2.17s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-277907 --alsologtostderr -v=3
E1003 19:41:51.958514  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/functional-680560/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-277907 --alsologtostderr -v=3: (2.169786733s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.17s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-277907 -n newest-cni-277907
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-277907 -n newest-cni-277907: exit status 7 (77.316753ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-277907 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (14.79s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-277907 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-277907 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (14.306307743s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-277907 -n newest-cni-277907
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (14.79s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-277907 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-ll25f" [5b3fda27-6d63-4fd1-8e59-407c16cc358b] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005599283s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (87.86s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-388132 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-388132 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m27.859005877s)
--- PASS: TestNetworkPlugins/group/auto/Start (87.86s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.16s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-ll25f" [5b3fda27-6d63-4fd1-8e59-407c16cc358b] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004991714s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-842797 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.16s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.32s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-842797 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (62.82s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-388132 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E1003 19:42:37.226344  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 19:42:37.232849  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 19:42:37.244217  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 19:42:37.265566  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 19:42:37.307733  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 19:42:37.389135  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 19:42:37.550592  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 19:42:37.871918  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 19:42:38.514095  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 19:42:39.795384  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 19:42:42.357469  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 19:42:47.478832  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 19:42:57.720740  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 19:43:18.202686  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-388132 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m2.821142998s)
--- PASS: TestNetworkPlugins/group/flannel/Start (62.82s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-spn5d" [12126892-5dea-4275-abbf-c97aad6abe3c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003102805s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-388132 "pgrep -a kubelet"
I1003 19:43:42.957628  286434 config.go:182] Loaded profile config "flannel-388132": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.35s)
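
The KubeletFlags checks run pgrep -a kubelet inside the node over minikube ssh and inspect the resulting kubelet command line for the expected flags. A sketch that just surfaces that command line (profile name from the log; printing instead of asserting on specific flags is a simplification):

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Same invocation as the log: the remote command is passed as one argument.
	out, err := exec.Command("minikube", "ssh", "-p", "flannel-388132",
		"pgrep -a kubelet").CombinedOutput()
	if err != nil {
		log.Fatalf("minikube ssh failed: %v\n%s", err, out)
	}
	fmt.Printf("kubelet process line:\n%s", out)
}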

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-388132 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-877l2" [def64124-1353-46de-94cf-48dfb293103b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-877l2" [def64124-1353-46de-94cf-48dfb293103b] Running
E1003 19:43:48.791165  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/old-k8s-version-174543/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.003424786s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.28s)
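
Each NetCatPod step force-replaces the netcat deployment from testdata/netcat-deployment.yaml and waits for its pod to go Running; that pod is the target the later DNS/Localhost/HairPin probes exec into. A sketch of the deploy-and-wait half (waiting on the deployment rollout instead of polling the app=netcat label, as the harness does, is a simplification):

package main

import (
	"log"
	"os/exec"
)

func kubectl(args ...string) {
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl %v: %v\n%s", args, err, out)
	}
}

func main() {
	ctx := "flannel-388132"
	kubectl("--context", ctx, "replace", "--force", "-f", "testdata/netcat-deployment.yaml")
	kubectl("--context", ctx, "rollout", "status", "deployment/netcat", "--timeout=900s")
}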

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-388132 "pgrep -a kubelet"
I1003 19:43:45.981070  286434 config.go:182] Loaded profile config "auto-388132": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (12.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-388132 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-29m2l" [eb70fc0b-dcdb-4483-9a3c-6a2a202c236a] Pending
helpers_test.go:352: "netcat-cd4db9dbf-29m2l" [eb70fc0b-dcdb-4483-9a3c-6a2a202c236a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.002982643s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-388132 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-388132 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-388132 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)
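
The DNS, Localhost, and HairPin entries above are three connectivity probes exec'd into the netcat deployment: resolve kubernetes.default, reach port 8080 via 127.0.0.1, and reach the pod's own service name (hairpin). A sketch bundling the three probes; the commands are taken verbatim from the log, and since whether the hairpin probe should succeed depends on the CNI under test, failures are only reported rather than treated as fatal.

package main

import (
	"fmt"
	"os/exec"
)

func probe(name string, args ...string) {
	out, err := exec.Command("kubectl", args...).CombinedOutput()
	if err != nil {
		fmt.Printf("%s probe failed: %v\n%s", name, err, out)
		return
	}
	fmt.Printf("%s probe OK\n", name)
}

func main() {
	ctx := "flannel-388132"
	base := []string{"--context", ctx, "exec", "deployment/netcat", "--"}

	// DNS: cluster DNS must resolve the kubernetes service name.
	probe("DNS", append(base, "nslookup", "kubernetes.default")...)
	// Localhost: the pod can reach its own port 8080 via 127.0.0.1.
	probe("Localhost", append(base, "/bin/sh", "-c", "nc -w 5 -i 5 -z localhost 8080")...)
	// HairPin: the pod reaches itself through its own service name.
	probe("HairPin", append(base, "/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080")...)
}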

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-388132 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-388132 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-388132 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (72.62s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-388132 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-388132 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m12.623357634s)
--- PASS: TestNetworkPlugins/group/calico/Start (72.62s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (71.92s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-388132 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E1003 19:44:45.417102  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/addons-952140/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 19:45:21.087995  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-388132 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m11.920454431s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (71.92s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-4wdch" [dec6476b-dabb-4aeb-9a6c-5338606496d6] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-4wdch" [dec6476b-dabb-4aeb-9a6c-5338606496d6] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003763898s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-388132 "pgrep -a kubelet"
I1003 19:45:37.698420  286434 config.go:182] Loaded profile config "custom-flannel-388132": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-388132 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-2wlfj" [4ce28850-2786-417f-9248-3852fb1f0edd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-2wlfj" [4ce28850-2786-417f-9248-3852fb1f0edd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004212292s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-388132 "pgrep -a kubelet"
I1003 19:45:38.599440  286434 config.go:182] Loaded profile config "calico-388132": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (12.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-388132 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-qrj8b" [b31ec374-99ca-47c5-9fe8-0421ed628fc9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-qrj8b" [b31ec374-99ca-47c5-9fe8-0421ed628fc9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.003428051s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-388132 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-388132 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-388132 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-388132 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-388132 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-388132 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (91.63s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-388132 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-388132 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m31.626476396s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (91.63s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (52.95s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-388132 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E1003 19:46:32.633101  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/old-k8s-version-174543/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 19:46:35.636863  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/default-k8s-diff-port-842797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1003 19:46:51.959001  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/functional-680560/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-388132 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (52.954858459s)
--- PASS: TestNetworkPlugins/group/bridge/Start (52.95s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-388132 "pgrep -a kubelet"
I1003 19:47:12.055810  286434 config.go:182] Loaded profile config "bridge-388132": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (11.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-388132 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-m95zj" [cbb1411e-0e86-4824-89fa-0098436cfc21] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1003 19:47:16.599052  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/default-k8s-diff-port-842797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-m95zj" [cbb1411e-0e86-4824-89fa-0098436cfc21] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.004050489s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.27s)
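The NetCatPod step replaces the netcat Deployment from testdata/netcat-deployment.yaml and then polls for pods labeled app=netcat to become Ready. A rough equivalent for following the same rollout manually, assuming the same kubectl context and the default namespace:

    kubectl --context bridge-388132 get pods -l app=netcat
    kubectl --context bridge-388132 wait --for=condition=Ready pod -l app=netcat --timeout=15m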

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-388132 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-388132 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-388132 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (87.93s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-388132 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-388132 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m27.934480075s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (87.93s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-dgkp8" [4d651751-3bf1-4380-93cc-fd453229b2f1] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004006319s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
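The ControllerPod step only waits for the kindnet CNI agent pod (label app=kindnet) to be Running in kube-system before the connectivity checks start. The same status can be inspected by hand, assuming the same profile and context name:

    kubectl --context kindnet-388132 -n kube-system get pods -l app=kindnet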

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-388132 "pgrep -a kubelet"
I1003 19:47:54.330058  286434 config.go:182] Loaded profile config "kindnet-388132": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.40s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (11.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-388132 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-9wnl6" [4d8678f7-a5de-4921-94b5-aed779194b5e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-9wnl6" [4d8678f7-a5de-4921-94b5-aed779194b5e] Running
E1003 19:48:04.931535  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/no-preload-643397/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004556812s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-388132 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-388132 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-388132 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-388132 "pgrep -a kubelet"
I1003 19:49:11.943606  286434 config.go:182] Loaded profile config "enable-default-cni-388132": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-388132 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-9gmfz" [274825fc-6998-43ad-8688-32b03de6b761] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-9gmfz" [274825fc-6998-43ad-8688-32b03de6b761] Running
E1003 19:49:17.578965  286434 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21625-284583/.minikube/profiles/flannel-388132/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.003541367s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-388132 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-388132 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-388132 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                    

Test skip (30/326)

x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.42s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-526019 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-526019" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-526019
--- SKIP: TestDownloadOnlyKic (0.42s)

                                                
                                    
x
+
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-839513" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-839513
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-388132 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-388132

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-388132

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-388132

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-388132

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-388132

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-388132

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-388132

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-388132

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-388132

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-388132

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-388132"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-388132"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-388132"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-388132

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-388132"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-388132"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-388132" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-388132" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-388132" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-388132" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-388132" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-388132" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-388132" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-388132" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-388132"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-388132"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-388132"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-388132"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-388132"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-388132" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-388132" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-388132" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-388132"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-388132"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-388132"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-388132"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-388132"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-388132

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-388132"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-388132"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-388132"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-388132"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-388132"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-388132"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-388132"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-388132"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-388132"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-388132"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-388132"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-388132"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-388132"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-388132"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-388132"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-388132"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-388132"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-388132"

                                                
                                                
----------------------- debugLogs end: kubenet-388132 [took: 3.286430542s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-388132" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-388132
--- SKIP: TestNetworkPlugins/group/kubenet (3.44s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.94s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-388132 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-388132

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-388132

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-388132

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-388132

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-388132

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-388132

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-388132

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-388132

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-388132

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-388132

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-388132"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-388132"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-388132"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-388132

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-388132"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-388132"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-388132" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-388132" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-388132" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-388132" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-388132" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-388132" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-388132" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-388132" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-388132"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-388132"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-388132"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-388132"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-388132"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-388132

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-388132

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-388132" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-388132" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-388132

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-388132

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-388132" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-388132" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-388132" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-388132" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-388132" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-388132"

>>> host: kubelet daemon config:
* Profile "cilium-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-388132"

>>> k8s: kubelet logs:
* Profile "cilium-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-388132"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-388132"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-388132"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-388132

>>> host: docker daemon status:
* Profile "cilium-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-388132"

>>> host: docker daemon config:
* Profile "cilium-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-388132"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-388132"

>>> host: docker system info:
* Profile "cilium-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-388132"

>>> host: cri-docker daemon status:
* Profile "cilium-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-388132"

>>> host: cri-docker daemon config:
* Profile "cilium-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-388132"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-388132"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-388132"

>>> host: cri-dockerd version:
* Profile "cilium-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-388132"

>>> host: containerd daemon status:
* Profile "cilium-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-388132"

>>> host: containerd daemon config:
* Profile "cilium-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-388132"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-388132"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-388132"

>>> host: containerd config dump:
* Profile "cilium-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-388132"

>>> host: crio daemon status:
* Profile "cilium-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-388132"

>>> host: crio daemon config:
* Profile "cilium-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-388132"

>>> host: /etc/crio:
* Profile "cilium-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-388132"

>>> host: crio config:
* Profile "cilium-388132" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-388132"

----------------------- debugLogs end: cilium-388132 [took: 3.784564063s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-388132" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-388132
--- SKIP: TestNetworkPlugins/group/cilium (3.94s)
